00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1996
00:00:00.001 originally caused by:
00:00:00.001  Started by upstream project "nightly-trigger" build number 3257
00:00:00.001  originally caused by:
00:00:00.001   Started by timer
00:00:00.140 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.141 The recommended git tool is: git
00:00:00.141 using credential 00000000-0000-0000-0000-000000000002
00:00:00.144 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.178 Fetching changes from the remote Git repository
00:00:00.183 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.215 Using shallow fetch with depth 1
00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.215 > git --version # timeout=10
00:00:00.244 > git --version # 'git version 2.39.2'
00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.616 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.626 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.636 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:06.636 > git config core.sparsecheckout # timeout=10
00:00:06.646 > git read-tree -mu HEAD # timeout=10
00:00:06.661 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:06.681 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:06.682 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:06.786 [Pipeline] Start of Pipeline
00:00:06.800 [Pipeline] library
00:00:06.802 Loading library shm_lib@master
00:00:06.802 Library shm_lib@master is cached. Copying from home.
00:00:06.819 [Pipeline] node
00:00:06.828 Running on VM-host-WFP7 in /var/jenkins/workspace/ubuntu20-vg-autotest_2
00:00:06.830 [Pipeline] {
00:00:06.841 [Pipeline] catchError
00:00:06.843 [Pipeline] {
00:00:06.855 [Pipeline] wrap
00:00:06.866 [Pipeline] {
00:00:06.873 [Pipeline] stage
00:00:06.875 [Pipeline] { (Prologue)
00:00:06.889 [Pipeline] echo
00:00:06.890 Node: VM-host-WFP7
00:00:06.895 [Pipeline] cleanWs
00:00:06.902 [WS-CLEANUP] Deleting project workspace...
00:00:06.902 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.907 [WS-CLEANUP] done
00:00:07.126 [Pipeline] setCustomBuildProperty
00:00:07.209 [Pipeline] httpRequest
00:00:07.240 [Pipeline] echo
00:00:07.241 Sorcerer 10.211.164.101 is alive
00:00:07.247 [Pipeline] httpRequest
00:00:07.251 HttpMethod: GET
00:00:07.251 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:07.252 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:07.265 Response Code: HTTP/1.1 200 OK
00:00:07.265 Success: Status code 200 is in the accepted range: 200,404
00:00:07.265 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:09.484 [Pipeline] sh
00:00:09.763 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:09.778 [Pipeline] httpRequest
00:00:09.809 [Pipeline] echo
00:00:09.811 Sorcerer 10.211.164.101 is alive
00:00:09.819 [Pipeline] httpRequest
00:00:09.824 HttpMethod: GET
00:00:09.824 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:09.825 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:09.845 Response Code: HTTP/1.1 200 OK
00:00:09.846 Success: Status code 200 is in the accepted range: 200,404
00:00:09.846 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:56.016 [Pipeline] sh
00:00:56.348 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:58.884 [Pipeline] sh
00:00:59.164 + git -C spdk log --oneline -n5
00:00:59.164 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:59.164 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:59.164 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:59.164 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:59.164 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:00:59.186 [Pipeline] writeFile
00:00:59.204 [Pipeline] sh
00:00:59.485 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:59.497 [Pipeline] sh
00:00:59.780 + cat autorun-spdk.conf
00:00:59.780 SPDK_TEST_UNITTEST=1
00:00:59.780 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.780 SPDK_TEST_NVME=1
00:00:59.780 SPDK_TEST_BLOCKDEV=1
00:00:59.780 SPDK_RUN_ASAN=1
00:00:59.780 SPDK_RUN_UBSAN=1
00:00:59.780 SPDK_TEST_RAID5=1
00:00:59.780 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:59.787 RUN_NIGHTLY=1
00:00:59.789 [Pipeline] }
00:00:59.807 [Pipeline] // stage
00:00:59.825 [Pipeline] stage
00:00:59.827 [Pipeline] { (Run VM)
00:00:59.842 [Pipeline] sh
00:01:00.126 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:00.126 + echo 'Start stage prepare_nvme.sh'
00:01:00.126 Start stage prepare_nvme.sh
00:01:00.126 + [[ -n 2 ]]
00:01:00.126 + disk_prefix=ex2
00:01:00.126 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_2 ]]
00:01:00.126 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf ]]
00:01:00.126 + source /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf
00:01:00.126 ++ SPDK_TEST_UNITTEST=1
00:01:00.126 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.126 ++ SPDK_TEST_NVME=1
00:01:00.126 ++ SPDK_TEST_BLOCKDEV=1
00:01:00.126 ++ SPDK_RUN_ASAN=1
00:01:00.126 ++ SPDK_RUN_UBSAN=1
00:01:00.126 ++ SPDK_TEST_RAID5=1
00:01:00.126 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.126 ++ RUN_NIGHTLY=1
00:01:00.126 + cd /var/jenkins/workspace/ubuntu20-vg-autotest_2
00:01:00.126 + nvme_files=()
00:01:00.126 + declare -A nvme_files
00:01:00.126 + backend_dir=/var/lib/libvirt/images/backends
00:01:00.126 + nvme_files['nvme.img']=5G
00:01:00.126 + nvme_files['nvme-cmb.img']=5G
00:01:00.126 + nvme_files['nvme-multi0.img']=4G
00:01:00.126 + nvme_files['nvme-multi1.img']=4G
00:01:00.126 + nvme_files['nvme-multi2.img']=4G
00:01:00.126 + nvme_files['nvme-openstack.img']=8G
00:01:00.126 + nvme_files['nvme-zns.img']=5G
00:01:00.126 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:00.126 + (( SPDK_TEST_FTL == 1 ))
00:01:00.126 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:00.126 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.126 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:00.126 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.126 + for nvme in "${!nvme_files[@]}"
00:01:00.127 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:00.386 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.386 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:00.386 + echo 'End stage prepare_nvme.sh'
00:01:00.386 End stage prepare_nvme.sh
00:01:00.398 [Pipeline] sh
00:01:00.682 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:00.682 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f ubuntu2004
00:01:00.682
00:01:00.682 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant
00:01:00.682 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk
00:01:00.682 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_2
00:01:00.682 HELP=0
00:01:00.682 DRY_RUN=0
00:01:00.682 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,
00:01:00.682 NVME_DISKS_TYPE=nvme,
00:01:00.682 NVME_AUTO_CREATE=0
00:01:00.682 NVME_DISKS_NAMESPACES=,
00:01:00.682 NVME_CMB=,
00:01:00.682 NVME_PMR=,
00:01:00.682 NVME_ZNS=,
00:01:00.682 NVME_MS=,
00:01:00.682 NVME_FDP=,
00:01:00.682 SPDK_VAGRANT_DISTRO=ubuntu2004
00:01:00.682 SPDK_VAGRANT_VMCPU=10
00:01:00.682 SPDK_VAGRANT_VMRAM=12288
00:01:00.682 SPDK_VAGRANT_PROVIDER=libvirt
00:01:00.682 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:00.682 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:00.682 SPDK_OPENSTACK_NETWORK=0
00:01:00.682 VAGRANT_PACKAGE_BOX=0
00:01:00.682 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:00.682 FORCE_DISTRO=true
00:01:00.682 VAGRANT_BOX_VERSION=
00:01:00.682 EXTRA_VAGRANTFILES=
00:01:00.682 NIC_MODEL=virtio
00:01:00.682
00:01:00.682 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt'
00:01:00.682 /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_2
00:01:02.591 Bringing machine 'default' up with 'libvirt' provider...
00:01:03.160 ==> default: Creating image (snapshot of base box volume).
00:01:03.419 ==> default: Creating domain with the following settings...
00:01:03.419 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720617882_e6d9c5a572fa0f96f3cf
00:01:03.419 ==> default: -- Domain type: kvm
00:01:03.419 ==> default: -- Cpus: 10
00:01:03.419 ==> default: -- Feature: acpi
00:01:03.419 ==> default: -- Feature: apic
00:01:03.419 ==> default: -- Feature: pae
00:01:03.419 ==> default: -- Memory: 12288M
00:01:03.419 ==> default: -- Memory Backing: hugepages:
00:01:03.419 ==> default: -- Management MAC:
00:01:03.419 ==> default: -- Loader:
00:01:03.419 ==> default: -- Nvram:
00:01:03.419 ==> default: -- Base box: spdk/ubuntu2004
00:01:03.419 ==> default: -- Storage pool: default
00:01:03.419 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720617882_e6d9c5a572fa0f96f3cf.img (20G)
00:01:03.419 ==> default: -- Volume Cache: default
00:01:03.419 ==> default: -- Kernel:
00:01:03.419 ==> default: -- Initrd:
00:01:03.419 ==> default: -- Graphics Type: vnc
00:01:03.419 ==> default: -- Graphics Port: -1
00:01:03.420 ==> default: -- Graphics IP: 127.0.0.1
00:01:03.420 ==> default: -- Graphics Password: Not defined
00:01:03.420 ==> default: -- Video Type: cirrus
00:01:03.420 ==> default: -- Video VRAM: 9216
00:01:03.420 ==> default: -- Sound Type:
00:01:03.420 ==> default: -- Keymap: en-us
00:01:03.420 ==> default: -- TPM Path:
00:01:03.420 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:03.420 ==> default: -- Command line args:
00:01:03.420 ==> default: -> value=-device,
00:01:03.420 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:03.420 ==> default: -> value=-drive,
00:01:03.420 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:03.420 ==> default: -> value=-device,
00:01:03.420 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:03.420 ==> default: Creating shared folders metadata...
00:01:03.420 ==> default: Starting domain.
00:01:04.811 ==> default: Waiting for domain to get an IP address...
00:01:14.866 ==> default: Waiting for SSH to become available...
00:01:15.435 ==> default: Configuring and enabling network interfaces...
00:01:18.725 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:22.008 ==> default: Mounting SSHFS shared folder...
00:01:22.946 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output
00:01:22.946 ==> default: Checking Mount..
00:01:25.481 ==> default: Checking Mount..
00:01:25.740 ==> default: Folder Successfully Mounted!
00:01:25.740 ==> default: Running provisioner: file...
00:01:25.999  default: ~/.gitconfig => .gitconfig
00:01:26.258
00:01:26.258 SUCCESS!
00:01:26.258
00:01:26.258 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt and type "vagrant ssh" to use.
00:01:26.258 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:26.258 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt" to destroy all trace of vm.
00:01:26.258
00:01:26.268 [Pipeline] }
00:01:26.288 [Pipeline] // stage
00:01:26.298 [Pipeline] dir
00:01:26.299 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt
00:01:26.300 [Pipeline] {
00:01:26.315 [Pipeline] catchError
00:01:26.316 [Pipeline] {
00:01:26.326 [Pipeline] sh
00:01:26.605 + vagrant ssh-config --host vagrant
00:01:26.605 + sed -ne '/^Host/,$p'
00:01:26.605 + tee ssh_conf
00:01:29.231 Host vagrant
00:01:29.231   HostName 192.168.121.89
00:01:29.231   User vagrant
00:01:29.231   Port 22
00:01:29.231   UserKnownHostsFile /dev/null
00:01:29.231   StrictHostKeyChecking no
00:01:29.231   PasswordAuthentication no
00:01:29.231   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004
00:01:29.231   IdentitiesOnly yes
00:01:29.231   LogLevel FATAL
00:01:29.231   ForwardAgent yes
00:01:29.231   ForwardX11 yes
00:01:29.231
00:01:29.249 [Pipeline] withEnv
00:01:29.252 [Pipeline] {
00:01:29.272 [Pipeline] sh
00:01:29.553 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant
00:01:29.553 #!/bin/bash
00:01:29.553 source /etc/os-release
00:01:29.553 [[ -e /image.version ]] && img=$(< /image.version)
00:01:29.553 # Minimal, systemd-like check.
00:01:29.553 if [[ -e /.dockerenv ]]; then
00:01:29.553 # Clear garbage from the node's name:
00:01:29.553 # agt-er_autotest_547-896 -> autotest_547-896
00:01:29.553 # $HOSTNAME is the actual container id
00:01:29.553 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:29.553 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:29.553 # We can assume this is a mount from a host where container is running,
00:01:29.553 # so fetch its hostname to easily identify the target swarm worker.
00:01:29.553 container="$(< /etc/hostname) ($agent)"
00:01:29.553 else
00:01:29.553 # Fallback
00:01:29.553 container=$agent
00:01:29.553 fi
00:01:29.553 fi
00:01:29.553 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:29.553
00:01:30.132 [Pipeline] }
00:01:30.156 [Pipeline] // withEnv
00:01:30.166 [Pipeline] setCustomBuildProperty
00:01:30.199 [Pipeline] stage
00:01:30.201 [Pipeline] { (Tests)
00:01:30.226 [Pipeline] sh
00:01:30.510 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:31.093 [Pipeline] sh
00:01:31.379 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:31.961 [Pipeline] timeout
00:01:31.962 Timeout set to expire in 1 hr 30 min
00:01:31.963 [Pipeline] {
00:01:31.978 [Pipeline] sh
00:01:32.256 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:33.192 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:33.205 [Pipeline] sh
00:01:33.485 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:34.051 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:34.063 [Pipeline] sh
00:01:34.340 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:34.922 [Pipeline] sh
00:01:35.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo
00:01:35.769 ++ readlink -f spdk_repo
00:01:35.769 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:35.769 + [[ -n /home/vagrant/spdk_repo ]]
00:01:35.769 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:35.769 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:35.769 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:35.769 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:35.769 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:35.769 + [[ ubuntu20-vg-autotest == pkgdep-* ]]
00:01:35.769 + cd /home/vagrant/spdk_repo
00:01:35.769 + source /etc/os-release
00:01:35.769 ++ NAME=Ubuntu
00:01:35.769 ++ VERSION='20.04.6 LTS (Focal Fossa)'
00:01:35.769 ++ ID=ubuntu
00:01:35.769 ++ ID_LIKE=debian
00:01:35.769 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS'
00:01:35.769 ++ VERSION_ID=20.04
00:01:35.769 ++ HOME_URL=https://www.ubuntu.com/
00:01:35.769 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:35.769 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:35.769 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:35.769 ++ VERSION_CODENAME=focal
00:01:35.769 ++ UBUNTU_CODENAME=focal
00:01:35.769 + uname -a
00:01:35.769 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:35.769 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:35.769 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:36.028 Hugepages
00:01:36.028 node   hugesize   free /  total
00:01:36.028 node0  1048576kB     0 /      0
00:01:36.028 node0     2048kB     0 /      0
00:01:36.028
00:01:36.028 Type    BDF           Vendor Device NUMA    Driver     Device  Block devices
00:01:36.028 virtio  0000:00:03.0  1af4   1001   unknown virtio-pci -       vda
00:01:36.028 NVMe    0000:00:06.0  1b36   0010   unknown nvme       nvme0   nvme0n1
00:01:36.028 + rm -f /tmp/spdk-ld-path
00:01:36.028 + source autorun-spdk.conf
00:01:36.028 ++ SPDK_TEST_UNITTEST=1
00:01:36.028 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.028 ++ SPDK_TEST_NVME=1
00:01:36.028 ++ SPDK_TEST_BLOCKDEV=1
00:01:36.028 ++ SPDK_RUN_ASAN=1
00:01:36.028 ++ SPDK_RUN_UBSAN=1
00:01:36.028 ++ SPDK_TEST_RAID5=1
00:01:36.028 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.028 ++ RUN_NIGHTLY=1
00:01:36.028 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.028 + [[ -n '' ]]
00:01:36.028 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:36.028 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:36.287 + for M in /var/spdk/build-*-manifest.txt
00:01:36.287 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.287 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.287 + for M in /var/spdk/build-*-manifest.txt
00:01:36.287 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.287 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.287 ++ uname
00:01:36.287 + [[ Linux == \L\i\n\u\x ]]
00:01:36.287 + sudo dmesg -T
00:01:36.287 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:36.287 + sudo dmesg --clear
00:01:36.287 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:36.287 + dmesg_pid=2386
00:01:36.287 + sudo dmesg -Tw
00:01:36.287 + [[ Ubuntu == FreeBSD ]]
00:01:36.287 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.287 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.287 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.287 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.287 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.287 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.287 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.287 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:36.287 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:36.287 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:36.287 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.287 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.287 Test configuration:
00:01:36.287 SPDK_TEST_UNITTEST=1
00:01:36.287 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.287 SPDK_TEST_NVME=1
00:01:36.287 SPDK_TEST_BLOCKDEV=1
00:01:36.287 SPDK_RUN_ASAN=1
00:01:36.287 SPDK_RUN_UBSAN=1
00:01:36.287 SPDK_TEST_RAID5=1
00:01:36.287 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.287 RUN_NIGHTLY=1
13:25:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:36.287 13:25:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:36.287 13:25:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:36.287 13:25:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:36.287 13:25:14 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:36.287 13:25:14 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:36.287 13:25:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:36.287 13:25:14 -- paths/export.sh@5 -- $ export PATH
00:01:36.287 13:25:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:36.287 13:25:14 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:36.287 13:25:14 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:36.287 13:25:14 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720617914.XXXXXX
00:01:36.287 13:25:14 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720617914.QWF7LJ
00:01:36.287 13:25:14 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:36.287 13:25:14 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:36.287 13:25:14 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:36.287 13:25:14 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:36.287 13:25:14 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:36.288 13:25:14 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:36.288 13:25:14 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:36.288 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.288 13:25:14 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:01:36.288 13:25:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:36.288 13:25:14 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:36.288 13:25:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:36.288 13:25:14 -- spdk/autobuild.sh@16 -- $ date -u
00:01:36.288 Wed Jul 10 13:25:14 UTC 2024
00:01:36.288 13:25:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:36.288 LTS-59-g4b94202c6
00:01:36.288 13:25:14 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:36.288 13:25:14 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:36.288 13:25:14 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:36.288 13:25:14 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:36.288 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.288 ************************************
00:01:36.288 START TEST asan
00:01:36.288 ************************************
00:01:36.288 using asan
00:01:36.288 13:25:14 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:36.288
00:01:36.288 real 0m0.000s
00:01:36.288 user 0m0.000s
00:01:36.288 sys 0m0.000s
00:01:36.288 13:25:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:36.288 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.288 ************************************
00:01:36.288 END TEST asan
00:01:36.288 ************************************
00:01:36.547 13:25:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:36.547 13:25:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:36.547 13:25:14 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:36.547 13:25:14 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:36.547 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.547 ************************************
00:01:36.547 START TEST ubsan
00:01:36.547 ************************************
00:01:36.547 using ubsan
00:01:36.547 13:25:14 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:36.547
00:01:36.547 real 0m0.000s
00:01:36.547 user 0m0.000s
00:01:36.547 sys 0m0.000s
00:01:36.547 13:25:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:36.547 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.547 ************************************
00:01:36.547 END TEST ubsan
00:01:36.547 ************************************
00:01:36.547 13:25:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:36.547 13:25:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:36.547 13:25:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:36.547 13:25:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:36.547 13:25:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:36.547 13:25:14 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:36.547 13:25:14 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:36.547 13:25:14 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:01:36.547 13:25:14 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:36.547 13:25:14 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:36.547 13:25:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.547 ************************************
00:01:36.547 START TEST unittest_build
00:01:36.547 ************************************
00:01:36.547 13:25:14 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:01:36.547 13:25:14 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:01:36.547 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:36.547 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:37.135 Using 'verbs' RDMA provider
00:01:52.604 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:10.714 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:10.714 Creating mk/config.mk...done.
00:02:10.714 Creating mk/cc.flags.mk...done.
00:02:10.714 Type 'make' to build.
00:02:10.714 13:25:47 -- common/autobuild_common.sh@403 -- $ make -j10
00:02:10.714 make[1]: Nothing to be done for 'all'.
00:02:10.714 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:13.108 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[the two reg_sizes.asm 'note' warnings above repeat verbatim for every ISA-L object assembled, timestamps 00:02:10.714 through 00:02:20.158; duplicates omitted]
`.note.gnu.property' [-w+other] 00:02:20.158 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.158 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.158 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.158 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.158 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.415 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.673 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.673 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.673 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.933 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:20.933 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.190 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.190 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.449 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.449 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.965 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:21.965 ./include//reg_sizes.asm:358: 
warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.483 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.483 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.483 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.742 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.742 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.742 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.024 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.024 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.024 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.024 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.024 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.292 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.292 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.292 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.292 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.292 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.550 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.550 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.550 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:23.807 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:24.065 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section 
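(Context on the warning above: it is what an older NASM prints when an ELF source declares a GNU property note section using the `note` section-type attribute, which that assembler version does not yet recognize; as the message says, the attribute is ignored and assembly continues, so the build is unaffected. A minimal shell reproduction, assuming the installed nasm predates support for the attribute; the directive shown is illustrative and not the actual contents of reg_sizes.asm:

    printf 'section .note.gnu.property note alloc noexec align=8\n' > prop.asm
    nasm -f elf64 -o prop.o prop.asm   # an affected nasm warns: Unknown section attribute 'note' ignored ... [-w+other]

Current NASM releases understand the attribute and assemble the same input silently.)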
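(For orientation: the DPDK configure run that follows is driven by Meson. The exact wrapper command SPDK's build scripts used is not captured in this log; the sketch below is a reconstruction, with every option value taken verbatim from the "User defined options" summary printed near the end of the configure output below:

    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        -Dbuildtype=debug -Ddefault_library=static -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
        -Ddisable_apps=graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev \
        -Ddisable_libs=gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp

The ninja invocation then performs the 264-step build whose progress is listed below.)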
00:02:25.357 The Meson build system 00:02:25.357 Version: 1.4.0 00:02:25.358 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:25.358 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:25.358 Build type: native build 00:02:25.358 Program cat found: YES (/usr/bin/cat) 00:02:25.358 Project name: DPDK 00:02:25.358 Project version: 23.11.0 00:02:25.358 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:25.358 C linker for the host machine: cc
ld.bfd 2.34 00:02:25.358 Host machine cpu family: x86_64 00:02:25.358 Host machine cpu: x86_64 00:02:25.358 Message: ## Building in Developer Mode ## 00:02:25.358 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:25.358 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:25.358 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:25.358 Program python3 found: YES (/usr/bin/python3) 00:02:25.358 Program cat found: YES (/usr/bin/cat) 00:02:25.358 Compiler for C supports arguments -march=native: YES 00:02:25.358 Checking for size of "void *" : 8 00:02:25.358 Checking for size of "void *" : 8 (cached) 00:02:25.358 Library m found: YES 00:02:25.358 Library numa found: YES 00:02:25.358 Has header "numaif.h" : YES 00:02:25.358 Library fdt found: NO 00:02:25.358 Library execinfo found: NO 00:02:25.358 Has header "execinfo.h" : YES 00:02:25.358 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:25.358 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:25.358 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:25.358 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:25.358 Run-time dependency openssl found: YES 1.1.1f 00:02:25.358 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:25.358 Library pcap found: NO 00:02:25.358 Compiler for C supports arguments -Wcast-qual: YES 00:02:25.358 Compiler for C supports arguments -Wdeprecated: YES 00:02:25.358 Compiler for C supports arguments -Wformat: YES 00:02:25.358 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:25.358 Compiler for C supports arguments -Wformat-security: YES 00:02:25.358 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.358 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:25.358 Compiler for C supports arguments -Wnested-externs: YES 00:02:25.358 Compiler for C supports arguments -Wold-style-definition: YES 00:02:25.358 Compiler for C supports arguments -Wpointer-arith: YES 00:02:25.358 Compiler for C supports arguments -Wsign-compare: YES 00:02:25.358 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:25.358 Compiler for C supports arguments -Wundef: YES 00:02:25.358 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.358 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:25.358 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:25.358 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.358 Program objdump found: YES (/usr/bin/objdump) 00:02:25.358 Compiler for C supports arguments -mavx512f: YES 00:02:25.358 Checking if "AVX512 checking" compiles: YES 00:02:25.358 Fetching value of define "__SSE4_2__" : 1 00:02:25.358 Fetching value of define "__AES__" : 1 00:02:25.358 Fetching value of define "__AVX__" : 1 00:02:25.358 Fetching value of define "__AVX2__" : 1 00:02:25.358 Fetching value of define "__AVX512BW__" : 1 00:02:25.358 Fetching value of define "__AVX512CD__" : 1 00:02:25.358 Fetching value of define "__AVX512DQ__" : 1 00:02:25.358 Fetching value of define "__AVX512F__" : 1 00:02:25.358 Fetching value of define "__AVX512VL__" : 1 00:02:25.358 Fetching value of define "__PCLMUL__" : 1 00:02:25.358 Fetching value of define "__RDRND__" : 1 00:02:25.358 Fetching value of define "__RDSEED__" : 1 00:02:25.358 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:25.358 Fetching value of define 
"__znver1__" : (undefined) 00:02:25.358 Fetching value of define "__znver2__" : (undefined) 00:02:25.358 Fetching value of define "__znver3__" : (undefined) 00:02:25.358 Fetching value of define "__znver4__" : (undefined) 00:02:25.358 Library asan found: YES 00:02:25.358 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:25.358 Message: lib/log: Defining dependency "log" 00:02:25.358 Message: lib/kvargs: Defining dependency "kvargs" 00:02:25.358 Message: lib/telemetry: Defining dependency "telemetry" 00:02:25.358 Library rt found: YES 00:02:25.358 Checking for function "getentropy" : NO 00:02:25.358 Message: lib/eal: Defining dependency "eal" 00:02:25.358 Message: lib/ring: Defining dependency "ring" 00:02:25.358 Message: lib/rcu: Defining dependency "rcu" 00:02:25.358 Message: lib/mempool: Defining dependency "mempool" 00:02:25.358 Message: lib/mbuf: Defining dependency "mbuf" 00:02:25.358 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:25.358 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:25.358 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:25.358 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:25.358 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:25.358 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:25.358 Compiler for C supports arguments -mpclmul: YES 00:02:25.358 Compiler for C supports arguments -maes: YES 00:02:25.358 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.358 Compiler for C supports arguments -mavx512bw: YES 00:02:25.358 Compiler for C supports arguments -mavx512dq: YES 00:02:25.358 Compiler for C supports arguments -mavx512vl: YES 00:02:25.358 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:25.358 Compiler for C supports arguments -mavx2: YES 00:02:25.358 Compiler for C supports arguments -mavx: YES 00:02:25.358 Message: lib/net: Defining dependency "net" 00:02:25.358 Message: lib/meter: Defining dependency "meter" 00:02:25.358 Message: lib/ethdev: Defining dependency "ethdev" 00:02:25.358 Message: lib/pci: Defining dependency "pci" 00:02:25.358 Message: lib/cmdline: Defining dependency "cmdline" 00:02:25.358 Message: lib/hash: Defining dependency "hash" 00:02:25.358 Message: lib/timer: Defining dependency "timer" 00:02:25.358 Message: lib/compressdev: Defining dependency "compressdev" 00:02:25.358 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:25.358 Message: lib/dmadev: Defining dependency "dmadev" 00:02:25.358 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:25.358 Message: lib/power: Defining dependency "power" 00:02:25.358 Message: lib/reorder: Defining dependency "reorder" 00:02:25.358 Message: lib/security: Defining dependency "security" 00:02:25.358 Has header "linux/userfaultfd.h" : YES 00:02:25.358 Has header "linux/vduse.h" : NO 00:02:25.358 Message: lib/vhost: Defining dependency "vhost" 00:02:25.358 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:25.358 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:25.358 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:25.358 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:25.358 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:25.358 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:25.358 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:25.358 Message: Disabling event/* drivers: missing internal dependency 
"eventdev" 00:02:25.358 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:25.358 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:25.358 Program doxygen found: YES (/usr/bin/doxygen) 00:02:25.358 Configuring doxy-api-html.conf using configuration 00:02:25.358 Configuring doxy-api-man.conf using configuration 00:02:25.358 Program mandb found: YES (/usr/bin/mandb) 00:02:25.358 Program sphinx-build found: NO 00:02:25.358 Configuring rte_build_config.h using configuration 00:02:25.358 Message: 00:02:25.358 ================= 00:02:25.358 Applications Enabled 00:02:25.358 ================= 00:02:25.358 00:02:25.358 apps: 00:02:25.358 00:02:25.358 00:02:25.358 Message: 00:02:25.358 ================= 00:02:25.358 Libraries Enabled 00:02:25.358 ================= 00:02:25.358 00:02:25.358 libs: 00:02:25.358 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:25.358 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:25.358 cryptodev, dmadev, power, reorder, security, vhost, 00:02:25.358 00:02:25.358 Message: 00:02:25.358 =============== 00:02:25.358 Drivers Enabled 00:02:25.358 =============== 00:02:25.358 00:02:25.358 common: 00:02:25.358 00:02:25.358 bus: 00:02:25.358 pci, vdev, 00:02:25.358 mempool: 00:02:25.358 ring, 00:02:25.358 dma: 00:02:25.358 00:02:25.358 net: 00:02:25.358 00:02:25.358 crypto: 00:02:25.358 00:02:25.358 compress: 00:02:25.358 00:02:25.358 vdpa: 00:02:25.358 00:02:25.358 00:02:25.358 Message: 00:02:25.358 ================= 00:02:25.358 Content Skipped 00:02:25.358 ================= 00:02:25.358 00:02:25.358 apps: 00:02:25.358 dumpcap: explicitly disabled via build config 00:02:25.358 graph: explicitly disabled via build config 00:02:25.358 pdump: explicitly disabled via build config 00:02:25.358 proc-info: explicitly disabled via build config 00:02:25.358 test-acl: explicitly disabled via build config 00:02:25.358 test-bbdev: explicitly disabled via build config 00:02:25.358 test-cmdline: explicitly disabled via build config 00:02:25.358 test-compress-perf: explicitly disabled via build config 00:02:25.358 test-crypto-perf: explicitly disabled via build config 00:02:25.358 test-dma-perf: explicitly disabled via build config 00:02:25.358 test-eventdev: explicitly disabled via build config 00:02:25.358 test-fib: explicitly disabled via build config 00:02:25.358 test-flow-perf: explicitly disabled via build config 00:02:25.358 test-gpudev: explicitly disabled via build config 00:02:25.358 test-mldev: explicitly disabled via build config 00:02:25.358 test-pipeline: explicitly disabled via build config 00:02:25.358 test-pmd: explicitly disabled via build config 00:02:25.358 test-regex: explicitly disabled via build config 00:02:25.358 test-sad: explicitly disabled via build config 00:02:25.358 test-security-perf: explicitly disabled via build config 00:02:25.358 00:02:25.358 libs: 00:02:25.358 metrics: explicitly disabled via build config 00:02:25.358 acl: explicitly disabled via build config 00:02:25.358 bbdev: explicitly disabled via build config 00:02:25.358 bitratestats: explicitly disabled via build config 00:02:25.358 bpf: explicitly disabled via build config 00:02:25.358 cfgfile: explicitly disabled via build config 00:02:25.358 distributor: explicitly disabled via build config 00:02:25.358 efd: explicitly disabled via build config 00:02:25.358 eventdev: explicitly disabled via build config 00:02:25.358 dispatcher: explicitly disabled via build config 00:02:25.359 gpudev: explicitly disabled 
via build config 00:02:25.359 gro: explicitly disabled via build config 00:02:25.359 gso: explicitly disabled via build config 00:02:25.359 ip_frag: explicitly disabled via build config 00:02:25.359 jobstats: explicitly disabled via build config 00:02:25.359 latencystats: explicitly disabled via build config 00:02:25.359 lpm: explicitly disabled via build config 00:02:25.359 member: explicitly disabled via build config 00:02:25.359 pcapng: explicitly disabled via build config 00:02:25.359 rawdev: explicitly disabled via build config 00:02:25.359 regexdev: explicitly disabled via build config 00:02:25.359 mldev: explicitly disabled via build config 00:02:25.359 rib: explicitly disabled via build config 00:02:25.359 sched: explicitly disabled via build config 00:02:25.359 stack: explicitly disabled via build config 00:02:25.359 ipsec: explicitly disabled via build config 00:02:25.359 pdcp: explicitly disabled via build config 00:02:25.359 fib: explicitly disabled via build config 00:02:25.359 port: explicitly disabled via build config 00:02:25.359 pdump: explicitly disabled via build config 00:02:25.359 table: explicitly disabled via build config 00:02:25.359 pipeline: explicitly disabled via build config 00:02:25.359 graph: explicitly disabled via build config 00:02:25.359 node: explicitly disabled via build config 00:02:25.359 00:02:25.359 drivers: 00:02:25.359 common/cpt: not in enabled drivers build config 00:02:25.359 common/dpaax: not in enabled drivers build config 00:02:25.359 common/iavf: not in enabled drivers build config 00:02:25.359 common/idpf: not in enabled drivers build config 00:02:25.359 common/mvep: not in enabled drivers build config 00:02:25.359 common/octeontx: not in enabled drivers build config 00:02:25.359 bus/auxiliary: not in enabled drivers build config 00:02:25.359 bus/cdx: not in enabled drivers build config 00:02:25.359 bus/dpaa: not in enabled drivers build config 00:02:25.359 bus/fslmc: not in enabled drivers build config 00:02:25.359 bus/ifpga: not in enabled drivers build config 00:02:25.359 bus/platform: not in enabled drivers build config 00:02:25.359 bus/vmbus: not in enabled drivers build config 00:02:25.359 common/cnxk: not in enabled drivers build config 00:02:25.359 common/mlx5: not in enabled drivers build config 00:02:25.359 common/nfp: not in enabled drivers build config 00:02:25.359 common/qat: not in enabled drivers build config 00:02:25.359 common/sfc_efx: not in enabled drivers build config 00:02:25.359 mempool/bucket: not in enabled drivers build config 00:02:25.359 mempool/cnxk: not in enabled drivers build config 00:02:25.359 mempool/dpaa: not in enabled drivers build config 00:02:25.359 mempool/dpaa2: not in enabled drivers build config 00:02:25.359 mempool/octeontx: not in enabled drivers build config 00:02:25.359 mempool/stack: not in enabled drivers build config 00:02:25.359 dma/cnxk: not in enabled drivers build config 00:02:25.359 dma/dpaa: not in enabled drivers build config 00:02:25.359 dma/dpaa2: not in enabled drivers build config 00:02:25.359 dma/hisilicon: not in enabled drivers build config 00:02:25.359 dma/idxd: not in enabled drivers build config 00:02:25.359 dma/ioat: not in enabled drivers build config 00:02:25.359 dma/skeleton: not in enabled drivers build config 00:02:25.359 net/af_packet: not in enabled drivers build config 00:02:25.359 net/af_xdp: not in enabled drivers build config 00:02:25.359 net/ark: not in enabled drivers build config 00:02:25.359 net/atlantic: not in enabled drivers build config 00:02:25.359 
net/avp: not in enabled drivers build config 00:02:25.359 net/axgbe: not in enabled drivers build config 00:02:25.359 net/bnx2x: not in enabled drivers build config 00:02:25.359 net/bnxt: not in enabled drivers build config 00:02:25.359 net/bonding: not in enabled drivers build config 00:02:25.359 net/cnxk: not in enabled drivers build config 00:02:25.359 net/cpfl: not in enabled drivers build config 00:02:25.359 net/cxgbe: not in enabled drivers build config 00:02:25.359 net/dpaa: not in enabled drivers build config 00:02:25.359 net/dpaa2: not in enabled drivers build config 00:02:25.359 net/e1000: not in enabled drivers build config 00:02:25.359 net/ena: not in enabled drivers build config 00:02:25.359 net/enetc: not in enabled drivers build config 00:02:25.359 net/enetfec: not in enabled drivers build config 00:02:25.359 net/enic: not in enabled drivers build config 00:02:25.359 net/failsafe: not in enabled drivers build config 00:02:25.359 net/fm10k: not in enabled drivers build config 00:02:25.359 net/gve: not in enabled drivers build config 00:02:25.359 net/hinic: not in enabled drivers build config 00:02:25.359 net/hns3: not in enabled drivers build config 00:02:25.359 net/i40e: not in enabled drivers build config 00:02:25.359 net/iavf: not in enabled drivers build config 00:02:25.359 net/ice: not in enabled drivers build config 00:02:25.359 net/idpf: not in enabled drivers build config 00:02:25.359 net/igc: not in enabled drivers build config 00:02:25.359 net/ionic: not in enabled drivers build config 00:02:25.359 net/ipn3ke: not in enabled drivers build config 00:02:25.359 net/ixgbe: not in enabled drivers build config 00:02:25.359 net/mana: not in enabled drivers build config 00:02:25.359 net/memif: not in enabled drivers build config 00:02:25.359 net/mlx4: not in enabled drivers build config 00:02:25.359 net/mlx5: not in enabled drivers build config 00:02:25.359 net/mvneta: not in enabled drivers build config 00:02:25.359 net/mvpp2: not in enabled drivers build config 00:02:25.359 net/netvsc: not in enabled drivers build config 00:02:25.359 net/nfb: not in enabled drivers build config 00:02:25.359 net/nfp: not in enabled drivers build config 00:02:25.359 net/ngbe: not in enabled drivers build config 00:02:25.359 net/null: not in enabled drivers build config 00:02:25.359 net/octeontx: not in enabled drivers build config 00:02:25.359 net/octeon_ep: not in enabled drivers build config 00:02:25.359 net/pcap: not in enabled drivers build config 00:02:25.359 net/pfe: not in enabled drivers build config 00:02:25.359 net/qede: not in enabled drivers build config 00:02:25.359 net/ring: not in enabled drivers build config 00:02:25.359 net/sfc: not in enabled drivers build config 00:02:25.359 net/softnic: not in enabled drivers build config 00:02:25.359 net/tap: not in enabled drivers build config 00:02:25.359 net/thunderx: not in enabled drivers build config 00:02:25.359 net/txgbe: not in enabled drivers build config 00:02:25.359 net/vdev_netvsc: not in enabled drivers build config 00:02:25.359 net/vhost: not in enabled drivers build config 00:02:25.359 net/virtio: not in enabled drivers build config 00:02:25.359 net/vmxnet3: not in enabled drivers build config 00:02:25.359 raw/*: missing internal dependency, "rawdev" 00:02:25.359 crypto/armv8: not in enabled drivers build config 00:02:25.359 crypto/bcmfs: not in enabled drivers build config 00:02:25.359 crypto/caam_jr: not in enabled drivers build config 00:02:25.359 crypto/ccp: not in enabled drivers build config 00:02:25.359 
crypto/cnxk: not in enabled drivers build config 00:02:25.359 crypto/dpaa_sec: not in enabled drivers build config 00:02:25.359 crypto/dpaa2_sec: not in enabled drivers build config 00:02:25.359 crypto/ipsec_mb: not in enabled drivers build config 00:02:25.359 crypto/mlx5: not in enabled drivers build config 00:02:25.359 crypto/mvsam: not in enabled drivers build config 00:02:25.359 crypto/nitrox: not in enabled drivers build config 00:02:25.359 crypto/null: not in enabled drivers build config 00:02:25.359 crypto/octeontx: not in enabled drivers build config 00:02:25.359 crypto/openssl: not in enabled drivers build config 00:02:25.359 crypto/scheduler: not in enabled drivers build config 00:02:25.359 crypto/uadk: not in enabled drivers build config 00:02:25.359 crypto/virtio: not in enabled drivers build config 00:02:25.359 compress/isal: not in enabled drivers build config 00:02:25.359 compress/mlx5: not in enabled drivers build config 00:02:25.359 compress/octeontx: not in enabled drivers build config 00:02:25.359 compress/zlib: not in enabled drivers build config 00:02:25.359 regex/*: missing internal dependency, "regexdev" 00:02:25.359 ml/*: missing internal dependency, "mldev" 00:02:25.359 vdpa/ifc: not in enabled drivers build config 00:02:25.359 vdpa/mlx5: not in enabled drivers build config 00:02:25.359 vdpa/nfp: not in enabled drivers build config 00:02:25.359 vdpa/sfc: not in enabled drivers build config 00:02:25.359 event/*: missing internal dependency, "eventdev" 00:02:25.359 baseband/*: missing internal dependency, "bbdev" 00:02:25.359 gpu/*: missing internal dependency, "gpudev" 00:02:25.359 00:02:25.359 00:02:25.619 Build targets in project: 85 00:02:25.619 00:02:25.619 DPDK 23.11.0 00:02:25.619 00:02:25.619 User defined options 00:02:25.619 buildtype : debug 00:02:25.619 default_library : static 00:02:25.619 libdir : lib 00:02:25.619 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.619 b_sanitize : address 00:02:25.619 c_args : -fPIC -Werror 00:02:25.619 c_link_args : 00:02:25.619 cpu_instruction_set: native 00:02:25.619 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:25.619 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:25.619 enable_docs : false 00:02:25.619 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:25.619 enable_kmods : false 00:02:25.619 tests : false 00:02:25.619 00:02:25.619 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.188 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:26.188 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.188 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.188 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.188 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.188 [5/264] Linking static target lib/librte_kvargs.a 00:02:26.188 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.188 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.447 [8/264] Linking static target 
lib/librte_log.a 00:02:26.447 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.447 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.447 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.447 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.447 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.447 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.447 [15/264] Linking static target lib/librte_telemetry.a 00:02:26.447 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.447 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.706 [18/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.706 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.706 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.706 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.706 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.706 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.706 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.706 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.706 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.966 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.966 [28/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.966 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.966 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.966 [31/264] Linking target lib/librte_log.so.24.0 00:02:26.966 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:26.966 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:26.966 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.966 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.966 [36/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.966 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.966 [38/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:26.966 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.966 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.967 [41/264] Linking target lib/librte_telemetry.so.24.0 00:02:26.967 [42/264] Linking target lib/librte_kvargs.so.24.0 00:02:27.234 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.234 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.234 [45/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:27.234 [46/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:27.234 [47/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.234 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.234 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.234 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.234 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:27.234 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.494 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.494 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.494 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.494 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.494 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.494 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:27.494 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.494 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.494 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.494 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.494 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.494 [64/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.494 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.494 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.494 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.755 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.755 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.755 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.755 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.755 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.755 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.755 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.755 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.755 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.755 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.755 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.015 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.015 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.015 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.015 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.015 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.015 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.015 [85/264] Linking static target lib/librte_ring.a 00:02:28.015 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.015 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.015 [88/264] Linking static target 
lib/librte_eal.a 00:02:28.015 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.274 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.274 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.274 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.274 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.274 [94/264] Linking static target lib/librte_mempool.a 00:02:28.274 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.274 [96/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.274 [97/264] Linking static target lib/librte_rcu.a 00:02:28.274 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.534 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:28.534 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.534 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.534 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.534 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.534 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.534 [105/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.794 [106/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.794 [107/264] Linking static target lib/librte_net.a 00:02:28.794 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.794 [109/264] Linking static target lib/librte_meter.a 00:02:28.794 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.794 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.794 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.794 [113/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.794 [114/264] Linking static target lib/librte_mbuf.a 00:02:28.794 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.794 [116/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.054 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.054 [118/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.313 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:29.313 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:29.313 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.313 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:29.573 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.573 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.573 [125/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.573 [126/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:29.573 [127/264] Linking static target lib/librte_pci.a 00:02:29.573 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:29.573 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:02:29.573 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:29.573 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:29.573 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.573 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.573 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.573 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.573 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:29.573 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.832 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.832 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.832 [140/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.832 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.832 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:29.832 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.832 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:29.832 [145/264] Linking static target lib/librte_cmdline.a 00:02:30.092 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.092 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.092 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.092 [149/264] Linking static target lib/librte_timer.a 00:02:30.092 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.092 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:30.352 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.352 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:30.352 [154/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.352 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.352 [156/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.352 [157/264] Linking static target lib/librte_compressdev.a 00:02:30.611 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.611 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.611 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.611 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.611 [162/264] Linking static target lib/librte_dmadev.a 00:02:30.871 [163/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.871 [164/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.871 [165/264] Linking static target lib/librte_hash.a 00:02:30.871 [166/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.871 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.871 [168/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.871 [169/264] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.871 [170/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.871 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.130 [172/264] Linking static target lib/librte_ethdev.a 00:02:31.130 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.130 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:31.130 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:31.130 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:31.130 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:31.130 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.388 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:31.388 [180/264] Linking static target lib/librte_power.a 00:02:31.388 [181/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.388 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.388 [183/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.388 [184/264] Linking static target lib/librte_cryptodev.a 00:02:31.388 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.388 [186/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.388 [187/264] Linking static target lib/librte_reorder.a 00:02:31.646 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.646 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.646 [190/264] Linking static target lib/librte_security.a 00:02:31.906 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.906 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.906 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.165 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.165 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.165 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.165 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.165 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.426 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.426 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.426 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.426 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.426 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.426 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.686 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.686 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.686 [207/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.686 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.686 [209/264] 
Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.686 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.686 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:32.686 [212/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.686 [213/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.686 [214/264] Linking static target drivers/librte_bus_pci.a 00:02:32.686 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.945 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.945 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.945 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.945 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.204 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.204 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.204 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:33.463 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.851 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.756 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.756 [226/264] Linking target lib/librte_eal.so.24.0 00:02:36.756 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:36.756 [228/264] Linking target lib/librte_ring.so.24.0 00:02:36.756 [229/264] Linking target lib/librte_meter.so.24.0 00:02:36.756 [230/264] Linking target lib/librte_dmadev.so.24.0 00:02:36.756 [231/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:36.756 [232/264] Linking target lib/librte_timer.so.24.0 00:02:36.756 [233/264] Linking target lib/librte_pci.so.24.0 00:02:37.015 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:37.015 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:37.015 [236/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:37.015 [237/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:37.015 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:37.015 [239/264] Linking target lib/librte_rcu.so.24.0 00:02:37.015 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:37.015 [241/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:37.015 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:37.015 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:37.274 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:37.274 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:37.274 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:37.274 [247/264] Linking target lib/librte_net.so.24.0 00:02:37.274 [248/264] Linking target lib/librte_reorder.so.24.0 00:02:37.274 [249/264] Linking 
target lib/librte_cryptodev.so.24.0 00:02:37.274 [250/264] Linking target lib/librte_compressdev.so.24.0 00:02:37.533 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:37.533 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:37.533 [253/264] Linking target lib/librte_cmdline.so.24.0 00:02:37.533 [254/264] Linking target lib/librte_hash.so.24.0 00:02:37.533 [255/264] Linking target lib/librte_security.so.24.0 00:02:37.533 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:38.908 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.908 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:38.908 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:38.908 [260/264] Linking target lib/librte_power.so.24.0 00:02:39.166 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.166 [262/264] Linking static target lib/librte_vhost.a 00:02:41.740 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.740 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:41.740 INFO: autodetecting backend as ninja 00:02:41.740 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:42.308 CC lib/ut/ut.o 00:02:42.308 CC lib/log/log_flags.o 00:02:42.308 CC lib/log/log.o 00:02:42.308 CC lib/ut_mock/mock.o 00:02:42.308 CC lib/log/log_deprecated.o 00:02:42.308 LIB libspdk_ut_mock.a 00:02:42.568 LIB libspdk_log.a 00:02:42.568 LIB libspdk_ut.a 00:02:42.568 CXX lib/trace_parser/trace.o 00:02:42.568 CC lib/util/base64.o 00:02:42.568 CC lib/util/bit_array.o 00:02:42.568 CC lib/util/cpuset.o 00:02:42.568 CC lib/util/crc32.o 00:02:42.568 CC lib/util/crc16.o 00:02:42.568 CC lib/util/crc32c.o 00:02:42.568 CC lib/ioat/ioat.o 00:02:42.568 CC lib/dma/dma.o 00:02:42.826 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.826 CC lib/util/crc32_ieee.o 00:02:42.826 CC lib/vfio_user/host/vfio_user.o 00:02:42.826 CC lib/util/crc64.o 00:02:42.826 CC lib/util/dif.o 00:02:42.826 LIB libspdk_dma.a 00:02:42.826 CC lib/util/fd.o 00:02:42.826 CC lib/util/file.o 00:02:42.826 CC lib/util/hexlify.o 00:02:42.826 CC lib/util/iov.o 00:02:42.826 CC lib/util/math.o 00:02:43.084 LIB libspdk_ioat.a 00:02:43.084 CC lib/util/pipe.o 00:02:43.084 CC lib/util/strerror_tls.o 00:02:43.084 CC lib/util/string.o 00:02:43.084 CC lib/util/uuid.o 00:02:43.084 LIB libspdk_vfio_user.a 00:02:43.084 CC lib/util/fd_group.o 00:02:43.084 CC lib/util/xor.o 00:02:43.084 CC lib/util/zipf.o 00:02:43.342 LIB libspdk_util.a 00:02:43.600 CC lib/rdma/common.o 00:02:43.600 CC lib/rdma/rdma_verbs.o 00:02:43.600 CC lib/conf/conf.o 00:02:43.600 CC lib/vmd/vmd.o 00:02:43.600 CC lib/vmd/led.o 00:02:43.600 CC lib/idxd/idxd.o 00:02:43.600 CC lib/idxd/idxd_user.o 00:02:43.600 CC lib/env_dpdk/env.o 00:02:43.600 CC lib/json/json_parse.o 00:02:43.600 LIB libspdk_trace_parser.a 00:02:43.857 CC lib/json/json_util.o 00:02:43.857 CC lib/json/json_write.o 00:02:43.857 CC lib/env_dpdk/memory.o 00:02:43.857 LIB libspdk_rdma.a 00:02:43.857 CC lib/env_dpdk/pci.o 00:02:44.147 CC lib/env_dpdk/init.o 00:02:44.147 LIB libspdk_conf.a 00:02:44.147 CC lib/env_dpdk/threads.o 00:02:44.147 CC lib/env_dpdk/pci_ioat.o 00:02:44.147 CC lib/env_dpdk/pci_virtio.o 00:02:44.147 CC lib/env_dpdk/pci_vmd.o 00:02:44.147 LIB libspdk_json.a 00:02:44.147 
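The ninja autodetection above marks the hand-off from the bundled DPDK build (the meson targets in dpdk/build-tmp) to compiling SPDK's own libraries (lib/ut, lib/log, lib/util, lib/env_dpdk, ...). A minimal sketch of driving this phase by hand, assuming the repo layout shown in the log; the configure flags are illustrative and not taken from this run:

    cd /home/vagrant/spdk_repo/spdk
    ./configure                 # e.g. add --enable-asan --enable-ubsan for a sanitized build
    make -j10                   # rebuilds dpdk/build-tmp via meson/ninja first, then the SPDK libs
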
CC lib/env_dpdk/pci_idxd.o 00:02:44.147 CC lib/env_dpdk/sigbus_handler.o 00:02:44.147 CC lib/env_dpdk/pci_event.o 00:02:44.406 CC lib/env_dpdk/pci_dpdk.o 00:02:44.406 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.406 LIB libspdk_idxd.a 00:02:44.406 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.406 LIB libspdk_vmd.a 00:02:44.406 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.406 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.406 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.406 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.975 LIB libspdk_jsonrpc.a 00:02:44.975 CC lib/rpc/rpc.o 00:02:45.235 LIB libspdk_rpc.a 00:02:45.235 LIB libspdk_env_dpdk.a 00:02:45.494 CC lib/sock/sock_rpc.o 00:02:45.494 CC lib/sock/sock.o 00:02:45.494 CC lib/trace/trace.o 00:02:45.494 CC lib/trace/trace_flags.o 00:02:45.494 CC lib/trace/trace_rpc.o 00:02:45.494 CC lib/notify/notify.o 00:02:45.494 CC lib/notify/notify_rpc.o 00:02:45.754 LIB libspdk_notify.a 00:02:45.754 LIB libspdk_trace.a 00:02:46.013 CC lib/thread/thread.o 00:02:46.013 CC lib/thread/iobuf.o 00:02:46.013 LIB libspdk_sock.a 00:02:46.013 CC lib/nvme/nvme_ns_cmd.o 00:02:46.013 CC lib/nvme/nvme_ctrlr.o 00:02:46.013 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.013 CC lib/nvme/nvme_fabric.o 00:02:46.013 CC lib/nvme/nvme_pcie.o 00:02:46.013 CC lib/nvme/nvme_qpair.o 00:02:46.013 CC lib/nvme/nvme_ns.o 00:02:46.013 CC lib/nvme/nvme_pcie_common.o 00:02:46.271 CC lib/nvme/nvme.o 00:02:46.529 CC lib/nvme/nvme_quirks.o 00:02:46.788 CC lib/nvme/nvme_transport.o 00:02:46.788 CC lib/nvme/nvme_discovery.o 00:02:46.788 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.788 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.047 CC lib/nvme/nvme_tcp.o 00:02:47.306 CC lib/nvme/nvme_opal.o 00:02:47.306 CC lib/nvme/nvme_io_msg.o 00:02:47.306 CC lib/nvme/nvme_poll_group.o 00:02:47.306 CC lib/nvme/nvme_zns.o 00:02:47.563 CC lib/nvme/nvme_cuse.o 00:02:47.563 CC lib/nvme/nvme_vfio_user.o 00:02:47.821 CC lib/nvme/nvme_rdma.o 00:02:48.399 LIB libspdk_thread.a 00:02:48.657 CC lib/accel/accel.o 00:02:48.657 CC lib/accel/accel_rpc.o 00:02:48.657 CC lib/accel/accel_sw.o 00:02:48.657 CC lib/virtio/virtio.o 00:02:48.657 CC lib/init/json_config.o 00:02:48.657 CC lib/init/subsystem.o 00:02:48.657 CC lib/init/subsystem_rpc.o 00:02:48.657 CC lib/virtio/virtio_vhost_user.o 00:02:48.657 CC lib/blob/blobstore.o 00:02:48.916 CC lib/blob/request.o 00:02:48.916 CC lib/blob/zeroes.o 00:02:48.916 CC lib/blob/blob_bs_dev.o 00:02:48.916 CC lib/init/rpc.o 00:02:48.916 CC lib/virtio/virtio_vfio_user.o 00:02:49.174 CC lib/virtio/virtio_pci.o 00:02:49.174 LIB libspdk_init.a 00:02:49.174 LIB libspdk_nvme.a 00:02:49.174 CC lib/event/app.o 00:02:49.174 CC lib/event/scheduler_static.o 00:02:49.174 CC lib/event/reactor.o 00:02:49.174 CC lib/event/log_rpc.o 00:02:49.174 CC lib/event/app_rpc.o 00:02:49.433 LIB libspdk_virtio.a 00:02:49.691 LIB libspdk_event.a 00:02:50.262 LIB libspdk_accel.a 00:02:50.262 CC lib/bdev/scsi_nvme.o 00:02:50.262 CC lib/bdev/bdev.o 00:02:50.262 CC lib/bdev/bdev_rpc.o 00:02:50.262 CC lib/bdev/bdev_zone.o 00:02:50.262 CC lib/bdev/part.o 00:02:52.822 LIB libspdk_blob.a 00:02:52.822 CC lib/blobfs/tree.o 00:02:52.822 CC lib/blobfs/blobfs.o 00:02:52.822 CC lib/lvol/lvol.o 00:02:53.755 LIB libspdk_blobfs.a 00:02:53.755 LIB libspdk_bdev.a 00:02:53.755 LIB libspdk_lvol.a 00:02:53.755 CC lib/nbd/nbd.o 00:02:53.755 CC lib/nbd/nbd_rpc.o 00:02:53.755 CC lib/nvmf/ctrlr_bdev.o 00:02:53.755 CC lib/nvmf/ctrlr_discovery.o 00:02:53.755 CC lib/nvmf/ctrlr.o 00:02:53.755 CC lib/nvmf/subsystem.o 00:02:53.755 CC lib/nvmf/nvmf.o 00:02:53.755 CC 
lib/nvmf/nvmf_rpc.o 00:02:53.755 CC lib/scsi/dev.o 00:02:53.755 CC lib/ftl/ftl_core.o 00:02:54.323 CC lib/ftl/ftl_init.o 00:02:54.323 CC lib/scsi/lun.o 00:02:54.323 CC lib/scsi/port.o 00:02:54.582 CC lib/ftl/ftl_layout.o 00:02:54.582 LIB libspdk_nbd.a 00:02:54.582 CC lib/scsi/scsi.o 00:02:54.582 CC lib/scsi/scsi_bdev.o 00:02:54.582 CC lib/nvmf/transport.o 00:02:54.582 CC lib/nvmf/tcp.o 00:02:54.840 CC lib/nvmf/rdma.o 00:02:54.840 CC lib/scsi/scsi_pr.o 00:02:55.099 CC lib/ftl/ftl_debug.o 00:02:55.099 CC lib/scsi/scsi_rpc.o 00:02:55.099 CC lib/scsi/task.o 00:02:55.099 CC lib/ftl/ftl_io.o 00:02:55.099 CC lib/ftl/ftl_sb.o 00:02:55.357 CC lib/ftl/ftl_l2p.o 00:02:55.357 CC lib/ftl/ftl_l2p_flat.o 00:02:55.357 CC lib/ftl/ftl_nv_cache.o 00:02:55.358 LIB libspdk_scsi.a 00:02:55.358 CC lib/ftl/ftl_band.o 00:02:55.358 CC lib/ftl/ftl_band_ops.o 00:02:55.616 CC lib/ftl/ftl_writer.o 00:02:55.616 CC lib/ftl/ftl_rq.o 00:02:55.616 CC lib/ftl/ftl_reloc.o 00:02:55.616 CC lib/ftl/ftl_l2p_cache.o 00:02:55.875 CC lib/ftl/ftl_p2l.o 00:02:55.875 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.875 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.133 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.133 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.133 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.133 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.133 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.391 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.391 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.391 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.391 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.391 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:56.391 CC lib/iscsi/conn.o 00:02:56.391 CC lib/iscsi/init_grp.o 00:02:56.649 CC lib/iscsi/iscsi.o 00:02:56.649 CC lib/iscsi/md5.o 00:02:56.649 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.906 CC lib/vhost/vhost.o 00:02:56.906 CC lib/iscsi/param.o 00:02:56.906 CC lib/iscsi/portal_grp.o 00:02:56.906 CC lib/iscsi/tgt_node.o 00:02:56.906 CC lib/ftl/utils/ftl_conf.o 00:02:57.164 CC lib/ftl/utils/ftl_md.o 00:02:57.164 CC lib/ftl/utils/ftl_mempool.o 00:02:57.164 CC lib/iscsi/iscsi_subsystem.o 00:02:57.164 CC lib/iscsi/iscsi_rpc.o 00:02:57.164 CC lib/iscsi/task.o 00:02:57.423 CC lib/ftl/utils/ftl_bitmap.o 00:02:57.423 CC lib/vhost/vhost_rpc.o 00:02:57.423 CC lib/ftl/utils/ftl_property.o 00:02:57.423 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:57.423 LIB libspdk_nvmf.a 00:02:57.423 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:57.423 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:57.681 CC lib/vhost/vhost_scsi.o 00:02:57.681 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:57.681 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:57.681 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:57.681 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:57.681 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:57.681 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:57.940 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:57.940 CC lib/ftl/base/ftl_base_dev.o 00:02:57.940 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.940 CC lib/ftl/ftl_trace.o 00:02:57.940 CC lib/vhost/vhost_blk.o 00:02:57.940 CC lib/vhost/rte_vhost_user.o 00:02:58.199 LIB libspdk_ftl.a 00:02:58.457 LIB libspdk_iscsi.a 00:02:59.022 LIB libspdk_vhost.a 00:02:59.282 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.282 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.282 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.282 CC module/blob/bdev/blob_bdev.o 00:02:59.282 CC module/sock/posix/posix.o 00:02:59.282 CC module/accel/ioat/accel_ioat.o 00:02:59.282 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.282 CC module/accel/error/accel_error.o 00:02:59.282 CC 
module/accel/dsa/accel_dsa.o 00:02:59.282 CC module/accel/iaa/accel_iaa.o 00:02:59.282 LIB libspdk_env_dpdk_rpc.a 00:02:59.282 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.282 LIB libspdk_scheduler_gscheduler.a 00:02:59.282 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.541 LIB libspdk_scheduler_dynamic.a 00:02:59.541 CC module/accel/error/accel_error_rpc.o 00:02:59.541 LIB libspdk_scheduler_dpdk_governor.a 00:02:59.541 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.541 LIB libspdk_accel_iaa.a 00:02:59.541 LIB libspdk_blob_bdev.a 00:02:59.541 LIB libspdk_accel_dsa.a 00:02:59.799 LIB libspdk_accel_ioat.a 00:02:59.799 LIB libspdk_accel_error.a 00:02:59.799 CC module/bdev/gpt/gpt.o 00:02:59.799 CC module/bdev/delay/vbdev_delay.o 00:02:59.799 CC module/bdev/error/vbdev_error.o 00:02:59.799 CC module/bdev/lvol/vbdev_lvol.o 00:02:59.799 CC module/bdev/malloc/bdev_malloc.o 00:02:59.799 CC module/blobfs/bdev/blobfs_bdev.o 00:02:59.799 CC module/bdev/null/bdev_null.o 00:02:59.799 CC module/bdev/passthru/vbdev_passthru.o 00:02:59.799 CC module/bdev/nvme/bdev_nvme.o 00:03:00.056 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.056 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.056 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.056 CC module/bdev/null/bdev_null_rpc.o 00:03:00.056 LIB libspdk_sock_posix.a 00:03:00.056 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.315 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.315 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.315 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.315 LIB libspdk_blobfs_bdev.a 00:03:00.315 LIB libspdk_bdev_error.a 00:03:00.315 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.315 LIB libspdk_bdev_gpt.a 00:03:00.315 LIB libspdk_bdev_null.a 00:03:00.315 CC module/bdev/raid/bdev_raid.o 00:03:00.315 LIB libspdk_bdev_passthru.a 00:03:00.315 LIB libspdk_bdev_delay.a 00:03:00.315 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.315 CC module/bdev/split/vbdev_split.o 00:03:00.315 LIB libspdk_bdev_malloc.a 00:03:00.315 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.315 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.573 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.574 CC module/bdev/aio/bdev_aio.o 00:03:00.574 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.574 CC module/bdev/raid/raid0.o 00:03:00.574 LIB libspdk_bdev_lvol.a 00:03:00.574 LIB libspdk_bdev_split.a 00:03:00.574 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.832 CC module/bdev/ftl/bdev_ftl.o 00:03:00.832 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.832 LIB libspdk_bdev_zone_block.a 00:03:00.832 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.832 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.832 CC module/bdev/raid/raid1.o 00:03:00.832 LIB libspdk_bdev_aio.a 00:03:00.832 CC module/bdev/raid/concat.o 00:03:00.832 CC module/bdev/nvme/nvme_rpc.o 00:03:01.090 CC module/bdev/nvme/bdev_mdns_client.o 00:03:01.090 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:01.090 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:01.090 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.090 LIB libspdk_bdev_ftl.a 00:03:01.090 CC module/bdev/raid/raid5f.o 00:03:01.090 CC module/bdev/nvme/vbdev_opal.o 00:03:01.090 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:01.091 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:01.349 LIB libspdk_bdev_iscsi.a 00:03:01.619 LIB libspdk_bdev_virtio.a 00:03:01.619 LIB libspdk_bdev_raid.a 00:03:02.642 LIB libspdk_bdev_nvme.a 00:03:02.900 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:02.900 CC module/event/subsystems/iobuf/iobuf.o 00:03:02.900 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:02.900 CC module/event/subsystems/scheduler/scheduler.o 00:03:02.900 CC module/event/subsystems/vmd/vmd.o 00:03:02.900 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:02.900 CC module/event/subsystems/sock/sock.o 00:03:03.158 LIB libspdk_event_vhost_blk.a 00:03:03.158 LIB libspdk_event_scheduler.a 00:03:03.158 LIB libspdk_event_sock.a 00:03:03.158 LIB libspdk_event_vmd.a 00:03:03.158 LIB libspdk_event_iobuf.a 00:03:03.416 CC module/event/subsystems/accel/accel.o 00:03:03.416 LIB libspdk_event_accel.a 00:03:03.674 CC module/event/subsystems/bdev/bdev.o 00:03:03.932 LIB libspdk_event_bdev.a 00:03:04.189 CC module/event/subsystems/scsi/scsi.o 00:03:04.189 CC module/event/subsystems/nbd/nbd.o 00:03:04.189 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.189 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.189 LIB libspdk_event_nbd.a 00:03:04.189 LIB libspdk_event_scsi.a 00:03:04.448 LIB libspdk_event_nvmf.a 00:03:04.448 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.448 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.448 LIB libspdk_event_vhost_scsi.a 00:03:04.706 LIB libspdk_event_iscsi.a 00:03:04.706 CXX app/trace/trace.o 00:03:04.965 CC examples/accel/perf/accel_perf.o 00:03:04.965 CC examples/ioat/perf/perf.o 00:03:04.965 CC examples/nvme/hello_world/hello_world.o 00:03:04.965 CC test/bdev/bdevio/bdevio.o 00:03:04.965 CC examples/blob/hello_world/hello_blob.o 00:03:04.965 CC test/accel/dif/dif.o 00:03:04.965 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.965 CC test/blobfs/mkfs/mkfs.o 00:03:04.965 CC test/app/bdev_svc/bdev_svc.o 00:03:04.965 LINK ioat_perf 00:03:05.224 LINK hello_blob 00:03:05.224 LINK hello_world 00:03:05.224 LINK mkfs 00:03:05.224 LINK hello_bdev 00:03:05.224 LINK bdev_svc 00:03:05.224 LINK spdk_trace 00:03:05.224 LINK bdevio 00:03:05.483 LINK dif 00:03:05.483 LINK accel_perf 00:03:05.741 CC examples/ioat/verify/verify.o 00:03:05.741 CC app/trace_record/trace_record.o 00:03:05.741 LINK verify 00:03:05.999 LINK spdk_trace_record 00:03:05.999 CC examples/nvme/reconnect/reconnect.o 00:03:06.257 CC app/nvmf_tgt/nvmf_main.o 00:03:06.515 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.515 LINK reconnect 00:03:06.515 LINK nvmf_tgt 00:03:06.515 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.082 LINK nvme_fuzz 00:03:07.339 LINK bdevperf 00:03:07.598 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.856 CC examples/blob/cli/blobcli.o 00:03:07.856 CC examples/nvme/arbitration/arbitration.o 00:03:07.856 CC examples/nvme/hotplug/hotplug.o 00:03:08.115 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.115 LINK nvme_manage 00:03:08.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:08.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:08.115 LINK hotplug 00:03:08.115 LINK arbitration 00:03:08.374 LINK blobcli 00:03:08.374 CC app/iscsi_tgt/iscsi_tgt.o 00:03:08.632 LINK vhost_fuzz 00:03:08.633 LINK iscsi_tgt 00:03:08.633 CC app/spdk_tgt/spdk_tgt.o 00:03:08.891 LINK spdk_tgt 00:03:09.150 CC app/spdk_lspci/spdk_lspci.o 00:03:09.150 CC app/spdk_nvme_perf/perf.o 00:03:09.150 LINK spdk_lspci 00:03:09.411 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:09.411 CC test/app/histogram_perf/histogram_perf.o 00:03:09.411 LINK histogram_perf 00:03:09.411 LINK cmb_copy 00:03:09.978 LINK iscsi_fuzz 00:03:09.978 CC test/app/jsoncat/jsoncat.o 00:03:09.978 LINK jsoncat 00:03:09.978 CC test/app/stub/stub.o 00:03:10.241 TEST_HEADER include/spdk/accel_module.h 00:03:10.241 TEST_HEADER include/spdk/bit_pool.h 00:03:10.241 TEST_HEADER 
include/spdk/ioat.h 00:03:10.241 TEST_HEADER include/spdk/blobfs.h 00:03:10.241 TEST_HEADER include/spdk/notify.h 00:03:10.241 TEST_HEADER include/spdk/pipe.h 00:03:10.241 LINK spdk_nvme_perf 00:03:10.241 TEST_HEADER include/spdk/accel.h 00:03:10.241 TEST_HEADER include/spdk/file.h 00:03:10.242 TEST_HEADER include/spdk/version.h 00:03:10.242 TEST_HEADER include/spdk/trace_parser.h 00:03:10.242 TEST_HEADER include/spdk/opal_spec.h 00:03:10.242 TEST_HEADER include/spdk/uuid.h 00:03:10.242 TEST_HEADER include/spdk/likely.h 00:03:10.242 TEST_HEADER include/spdk/dif.h 00:03:10.242 TEST_HEADER include/spdk/memory.h 00:03:10.242 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.242 TEST_HEADER include/spdk/dma.h 00:03:10.242 TEST_HEADER include/spdk/nbd.h 00:03:10.242 TEST_HEADER include/spdk/conf.h 00:03:10.242 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.242 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.242 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.242 TEST_HEADER include/spdk/mmio.h 00:03:10.242 TEST_HEADER include/spdk/json.h 00:03:10.242 TEST_HEADER include/spdk/opal.h 00:03:10.242 TEST_HEADER include/spdk/bdev.h 00:03:10.242 TEST_HEADER include/spdk/base64.h 00:03:10.242 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.242 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.242 TEST_HEADER include/spdk/fd.h 00:03:10.242 TEST_HEADER include/spdk/barrier.h 00:03:10.242 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.242 TEST_HEADER include/spdk/zipf.h 00:03:10.242 TEST_HEADER include/spdk/nvmf.h 00:03:10.242 TEST_HEADER include/spdk/queue.h 00:03:10.242 TEST_HEADER include/spdk/xor.h 00:03:10.242 TEST_HEADER include/spdk/cpuset.h 00:03:10.242 TEST_HEADER include/spdk/thread.h 00:03:10.242 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.242 TEST_HEADER include/spdk/fd_group.h 00:03:10.242 TEST_HEADER include/spdk/tree.h 00:03:10.242 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.242 TEST_HEADER include/spdk/crc64.h 00:03:10.242 TEST_HEADER include/spdk/assert.h 00:03:10.242 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.242 TEST_HEADER include/spdk/endian.h 00:03:10.242 TEST_HEADER include/spdk/pci_ids.h 00:03:10.242 TEST_HEADER include/spdk/log.h 00:03:10.242 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.242 TEST_HEADER include/spdk/ftl.h 00:03:10.242 TEST_HEADER include/spdk/config.h 00:03:10.242 TEST_HEADER include/spdk/vhost.h 00:03:10.242 TEST_HEADER include/spdk/bdev_module.h 00:03:10.242 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.242 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.242 TEST_HEADER include/spdk/crc16.h 00:03:10.242 TEST_HEADER include/spdk/nvme.h 00:03:10.242 TEST_HEADER include/spdk/stdinc.h 00:03:10.242 TEST_HEADER include/spdk/scsi.h 00:03:10.242 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.242 TEST_HEADER include/spdk/idxd.h 00:03:10.242 TEST_HEADER include/spdk/hexlify.h 00:03:10.242 TEST_HEADER include/spdk/reduce.h 00:03:10.242 TEST_HEADER include/spdk/crc32.h 00:03:10.242 TEST_HEADER include/spdk/init.h 00:03:10.242 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.242 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.242 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.242 TEST_HEADER include/spdk/util.h 00:03:10.242 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.242 TEST_HEADER include/spdk/env.h 00:03:10.242 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.242 TEST_HEADER include/spdk/lvol.h 00:03:10.242 TEST_HEADER include/spdk/histogram_data.h 00:03:10.242 LINK stub 00:03:10.242 TEST_HEADER include/spdk/event.h 00:03:10.242 TEST_HEADER include/spdk/trace.h 
00:03:10.242 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.242 TEST_HEADER include/spdk/string.h 00:03:10.242 TEST_HEADER include/spdk/ublk.h 00:03:10.242 TEST_HEADER include/spdk/bit_array.h 00:03:10.242 TEST_HEADER include/spdk/scheduler.h 00:03:10.242 TEST_HEADER include/spdk/blob.h 00:03:10.242 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.242 TEST_HEADER include/spdk/sock.h 00:03:10.242 TEST_HEADER include/spdk/vmd.h 00:03:10.242 TEST_HEADER include/spdk/rpc.h 00:03:10.242 CXX test/cpp_headers/accel_module.o 00:03:10.242 CXX test/cpp_headers/bit_pool.o 00:03:10.527 CXX test/cpp_headers/ioat.o 00:03:10.527 CC app/spdk_nvme_identify/identify.o 00:03:10.805 CC examples/nvme/abort/abort.o 00:03:10.805 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:10.805 CXX test/cpp_headers/blobfs.o 00:03:11.065 CXX test/cpp_headers/notify.o 00:03:11.065 CXX test/cpp_headers/pipe.o 00:03:11.065 LINK pmr_persistence 00:03:11.065 LINK abort 00:03:11.065 CXX test/cpp_headers/accel.o 00:03:11.632 CXX test/cpp_headers/file.o 00:03:11.632 CC examples/sock/hello_world/hello_sock.o 00:03:11.632 CC test/dma/test_dma/test_dma.o 00:03:11.632 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.632 CXX test/cpp_headers/version.o 00:03:11.890 CXX test/cpp_headers/trace_parser.o 00:03:11.890 LINK hello_sock 00:03:11.890 CC examples/util/zipf/zipf.o 00:03:11.890 LINK spdk_nvme_identify 00:03:11.890 CC examples/nvmf/nvmf/nvmf.o 00:03:11.890 LINK lsvmd 00:03:11.890 CXX test/cpp_headers/opal_spec.o 00:03:12.148 LINK zipf 00:03:12.148 LINK test_dma 00:03:12.148 CXX test/cpp_headers/uuid.o 00:03:12.406 LINK nvmf 00:03:12.406 CC examples/idxd/perf/perf.o 00:03:12.406 CC examples/thread/thread/thread_ex.o 00:03:12.406 CXX test/cpp_headers/likely.o 00:03:12.406 CXX test/cpp_headers/dif.o 00:03:12.663 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.663 LINK thread 00:03:12.663 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.663 LINK idxd_perf 00:03:12.663 CXX test/cpp_headers/memory.o 00:03:12.663 CC app/spdk_top/spdk_top.o 00:03:12.663 LINK interrupt_tgt 00:03:12.920 LINK spdk_nvme_discover 00:03:12.920 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.920 CC app/vhost/vhost.o 00:03:13.179 CC examples/vmd/led/led.o 00:03:13.179 CXX test/cpp_headers/dma.o 00:03:13.179 LINK vhost 00:03:13.179 CC app/spdk_dd/spdk_dd.o 00:03:13.436 LINK led 00:03:13.436 CXX test/cpp_headers/nbd.o 00:03:13.436 CXX test/cpp_headers/conf.o 00:03:13.694 CXX test/cpp_headers/env_dpdk.o 00:03:13.694 LINK spdk_dd 00:03:13.694 LINK spdk_top 00:03:13.957 CXX test/cpp_headers/nvmf_spec.o 00:03:13.957 CC app/fio/nvme/fio_plugin.o 00:03:14.218 CXX test/cpp_headers/iscsi_spec.o 00:03:14.218 CXX test/cpp_headers/mmio.o 00:03:14.218 CXX test/cpp_headers/json.o 00:03:14.476 CXX test/cpp_headers/opal.o 00:03:14.734 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.734 LINK spdk_nvme 00:03:14.734 CC test/event/event_perf/event_perf.o 00:03:14.734 CC test/lvol/esnap/esnap.o 00:03:14.734 CXX test/cpp_headers/bdev.o 00:03:14.734 CXX test/cpp_headers/base64.o 00:03:14.993 LINK event_perf 00:03:14.993 LINK mem_callbacks 00:03:14.993 CC test/nvme/aer/aer.o 00:03:15.251 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.251 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.508 CC test/nvme/reset/reset.o 00:03:15.508 LINK aer 00:03:15.508 CXX test/cpp_headers/fd.o 00:03:15.508 CC test/env/vtophys/vtophys.o 00:03:15.508 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.765 LINK env_dpdk_post_init 00:03:15.765 CXX test/cpp_headers/barrier.o 00:03:15.765 LINK vtophys 00:03:15.765 
LINK reset 00:03:15.765 CC test/event/reactor/reactor.o 00:03:15.765 CXX test/cpp_headers/scsi_spec.o 00:03:16.024 LINK reactor 00:03:16.282 CXX test/cpp_headers/zipf.o 00:03:16.282 CC test/env/memory/memory_ut.o 00:03:16.282 CXX test/cpp_headers/nvmf.o 00:03:16.282 CC app/fio/bdev/fio_plugin.o 00:03:16.540 CC test/env/pci/pci_ut.o 00:03:16.540 CXX test/cpp_headers/queue.o 00:03:16.798 CXX test/cpp_headers/xor.o 00:03:16.798 CC test/nvme/sgl/sgl.o 00:03:16.798 CXX test/cpp_headers/cpuset.o 00:03:16.798 CC test/event/reactor_perf/reactor_perf.o 00:03:16.798 LINK spdk_bdev 00:03:17.055 CXX test/cpp_headers/thread.o 00:03:17.055 CXX test/cpp_headers/bdev_zone.o 00:03:17.055 CC test/nvme/e2edp/nvme_dp.o 00:03:17.055 LINK reactor_perf 00:03:17.055 LINK sgl 00:03:17.055 LINK memory_ut 00:03:17.055 LINK pci_ut 00:03:17.055 CC test/nvme/overhead/overhead.o 00:03:17.055 CXX test/cpp_headers/fd_group.o 00:03:17.312 CC test/event/app_repeat/app_repeat.o 00:03:17.312 CXX test/cpp_headers/tree.o 00:03:17.313 LINK nvme_dp 00:03:17.313 CXX test/cpp_headers/blob_bdev.o 00:03:17.313 LINK app_repeat 00:03:17.570 CC test/event/scheduler/scheduler.o 00:03:17.570 LINK overhead 00:03:17.570 CXX test/cpp_headers/crc64.o 00:03:17.830 LINK scheduler 00:03:17.830 CXX test/cpp_headers/assert.o 00:03:17.830 CC test/rpc_client/rpc_client_test.o 00:03:17.830 CXX test/cpp_headers/nvme_spec.o 00:03:18.100 CC test/nvme/err_injection/err_injection.o 00:03:18.100 CXX test/cpp_headers/endian.o 00:03:18.100 LINK rpc_client_test 00:03:18.100 CXX test/cpp_headers/pci_ids.o 00:03:18.361 LINK err_injection 00:03:18.361 CC test/nvme/startup/startup.o 00:03:18.361 CXX test/cpp_headers/log.o 00:03:18.361 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.361 CC test/nvme/reserve/reserve.o 00:03:18.620 CXX test/cpp_headers/ftl.o 00:03:18.620 LINK startup 00:03:18.620 CC test/nvme/simple_copy/simple_copy.o 00:03:18.620 CC test/thread/poller_perf/poller_perf.o 00:03:18.620 LINK reserve 00:03:18.620 CC test/thread/lock/spdk_lock.o 00:03:18.620 CXX test/cpp_headers/config.o 00:03:18.620 CXX test/cpp_headers/vhost.o 00:03:18.878 LINK poller_perf 00:03:18.878 LINK simple_copy 00:03:18.878 CXX test/cpp_headers/bdev_module.o 00:03:19.137 CXX test/cpp_headers/nvme_intel.o 00:03:19.395 CXX test/cpp_headers/idxd_spec.o 00:03:19.395 CXX test/cpp_headers/crc16.o 00:03:19.652 CXX test/cpp_headers/nvme.o 00:03:19.652 CXX test/cpp_headers/stdinc.o 00:03:19.652 CXX test/cpp_headers/scsi.o 00:03:19.911 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.169 LINK esnap 00:03:20.169 CC test/nvme/connect_stress/connect_stress.o 00:03:20.429 CXX test/cpp_headers/idxd.o 00:03:20.429 CXX test/cpp_headers/hexlify.o 00:03:20.429 CXX test/cpp_headers/reduce.o 00:03:20.429 CXX test/cpp_headers/crc32.o 00:03:20.429 CXX test/cpp_headers/init.o 00:03:20.429 CC test/nvme/boot_partition/boot_partition.o 00:03:20.429 CXX test/cpp_headers/nvmf_transport.o 00:03:20.429 CXX test/cpp_headers/nvme_zns.o 00:03:20.429 LINK spdk_lock 00:03:20.688 LINK connect_stress 00:03:20.688 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.688 CXX test/cpp_headers/util.o 00:03:20.688 CC test/nvme/compliance/nvme_compliance.o 00:03:20.688 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.688 LINK boot_partition 00:03:20.688 CXX test/cpp_headers/jsonrpc.o 00:03:20.688 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:20.948 CXX test/cpp_headers/env.o 00:03:20.948 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:20.948 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:20.948 LINK 
fused_ordering 00:03:20.948 LINK histogram_ut 00:03:20.948 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.948 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.948 LINK nvme_compliance 00:03:21.207 CXX test/cpp_headers/lvol.o 00:03:21.207 LINK doorbell_aers 00:03:21.207 CXX test/cpp_headers/histogram_data.o 00:03:21.207 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:21.467 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:21.467 CXX test/cpp_headers/event.o 00:03:21.467 CXX test/cpp_headers/trace.o 00:03:21.726 CXX test/cpp_headers/ioat_spec.o 00:03:21.726 LINK scsi_nvme_ut 00:03:21.986 CC test/nvme/fdp/fdp.o 00:03:21.986 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:21.986 CXX test/cpp_headers/string.o 00:03:21.986 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:22.244 CXX test/cpp_headers/ublk.o 00:03:22.244 CXX test/cpp_headers/bit_array.o 00:03:22.244 CXX test/cpp_headers/scheduler.o 00:03:22.503 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:22.503 LINK fdp 00:03:22.503 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:22.503 CXX test/cpp_headers/blob.o 00:03:22.503 CC test/nvme/cuse/cuse.o 00:03:22.762 LINK gpt_ut 00:03:22.762 CXX test/cpp_headers/gpt_spec.o 00:03:22.762 CXX test/cpp_headers/sock.o 00:03:23.020 CXX test/cpp_headers/vmd.o 00:03:23.020 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:23.277 CXX test/cpp_headers/rpc.o 00:03:23.535 LINK vbdev_lvol_ut 00:03:23.535 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:23.535 LINK cuse 00:03:23.535 LINK accel_ut 00:03:23.535 LINK blob_bdev_ut 00:03:23.793 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:23.793 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:23.793 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:23.793 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:24.052 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:24.052 LINK tree_ut 00:03:24.310 LINK bdev_raid_sb_ut 00:03:24.310 LINK concat_ut 00:03:24.310 LINK raid1_ut 00:03:24.310 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:24.568 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:24.568 LINK bdev_raid_ut 00:03:24.568 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:24.828 LINK blobfs_bdev_ut 00:03:24.828 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:24.828 CC test/unit/lib/event/app.c/app_ut.o 00:03:25.086 LINK part_ut 00:03:25.086 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:25.086 LINK raid5f_ut 00:03:25.086 LINK dma_ut 00:03:25.368 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:25.642 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:25.642 LINK app_ut 00:03:25.642 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:25.642 LINK bdev_zone_ut 00:03:25.642 LINK blobfs_async_ut 00:03:25.901 LINK blobfs_sync_ut 00:03:25.901 LINK ioat_ut 00:03:25.901 LINK reactor_ut 00:03:25.901 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:25.901 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:26.159 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:26.159 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:26.159 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:26.159 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:26.417 LINK vbdev_zone_block_ut 00:03:26.417 LINK bdev_ut 00:03:26.688 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:26.688 LINK init_grp_ut 00:03:26.688 LINK json_util_ut 00:03:26.688 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:26.945 LINK bdev_ut 00:03:26.945 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 
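Each *_ut.o compiled above is linked into a standalone CUnit binary placed next to its source file. A short sketch of invoking one directly, assuming the in-tree layout implied by the compile lines (SPDK also ships test/unit/unittest.sh to run the whole set):

    cd /home/vagrant/spdk_repo/spdk
    ./test/unit/lib/iscsi/portal_grp.c/portal_grp_ut   # runs just this suite; exit status reflects CUnit failures
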
00:03:26.945 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:26.945 LINK json_write_ut 00:03:27.203 LINK conn_ut 00:03:27.203 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:27.203 LINK param_ut 00:03:27.203 CC test/unit/lib/log/log.c/log_ut.o 00:03:27.462 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:27.462 LINK portal_grp_ut 00:03:27.462 LINK log_ut 00:03:27.721 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:27.721 LINK jsonrpc_server_ut 00:03:27.721 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:27.721 LINK tgt_node_ut 00:03:27.978 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:27.978 LINK notify_ut 00:03:27.978 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:28.235 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:28.235 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:28.235 LINK scsi_ut 00:03:28.494 LINK dev_ut 00:03:28.494 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:28.752 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:28.752 LINK json_parse_ut 00:03:28.752 LINK lun_ut 00:03:29.010 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:29.010 LINK scsi_pr_ut 00:03:29.010 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:29.275 LINK nvme_ut 00:03:29.275 LINK lvol_ut 00:03:29.275 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:29.541 LINK iscsi_ut 00:03:29.541 LINK scsi_bdev_ut 00:03:29.541 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:29.799 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:29.799 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:29.799 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:30.057 LINK base64_ut 00:03:30.057 LINK posix_ut 00:03:30.315 LINK bit_array_ut 00:03:30.315 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:30.315 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:30.315 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:30.573 LINK sock_ut 00:03:30.573 LINK cpuset_ut 00:03:30.573 LINK crc16_ut 00:03:30.573 LINK crc32_ieee_ut 00:03:30.573 LINK bdev_nvme_ut 00:03:30.573 LINK iobuf_ut 00:03:30.573 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:30.832 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:30.832 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:30.832 LINK crc32c_ut 00:03:30.832 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:30.832 LINK crc64_ut 00:03:30.832 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:30.832 CC test/unit/lib/util/math.c/math_ut.o 00:03:31.091 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:31.091 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:31.091 LINK math_ut 00:03:31.091 LINK pci_event_ut 00:03:31.091 LINK iov_ut 00:03:31.349 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:31.349 LINK blob_ut 00:03:31.349 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:31.349 LINK subsystem_ut 00:03:31.349 LINK rpc_ut 00:03:31.349 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:31.606 LINK thread_ut 00:03:31.606 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:31.606 CC test/unit/lib/util/string.c/string_ut.o 00:03:31.606 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:31.606 LINK pipe_ut 00:03:31.865 LINK dif_ut 00:03:31.865 LINK idxd_user_ut 00:03:31.865 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:31.865 LINK xor_ut 00:03:31.865 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:32.124 LINK string_ut 00:03:32.124 LINK tcp_ut 00:03:32.124 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:32.124 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:32.384 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:32.384 LINK ftl_l2p_ut 00:03:32.384 CC 
test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:32.384 LINK idxd_ut 00:03:32.384 LINK common_ut 00:03:32.384 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:32.643 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:32.643 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:32.643 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:32.643 LINK nvme_ctrlr_ut 00:03:32.902 LINK ftl_bitmap_ut 00:03:32.902 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:33.248 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:33.248 LINK ftl_io_ut 00:03:33.248 LINK vhost_ut 00:03:33.532 LINK ftl_mempool_ut 00:03:33.532 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:33.532 LINK ctrlr_bdev_ut 00:03:33.532 LINK nvme_ctrlr_cmd_ut 00:03:33.532 LINK ftl_mngt_ut 00:03:33.532 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:33.532 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:33.806 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:33.806 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:33.806 LINK ftl_band_ut 00:03:33.806 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:34.069 LINK ctrlr_discovery_ut 00:03:34.069 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:34.069 LINK subsystem_ut 00:03:34.327 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:34.586 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:34.845 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:34.845 LINK nvmf_ut 00:03:34.845 LINK nvme_ns_ut 00:03:34.845 LINK ftl_layout_upgrade_ut 00:03:34.845 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:35.105 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:35.105 LINK ctrlr_ut 00:03:35.105 LINK ftl_sb_ut 00:03:35.105 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:35.105 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:35.364 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:35.364 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:35.623 LINK nvme_quirks_ut 00:03:35.883 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:35.883 LINK nvme_poll_group_ut 00:03:36.143 LINK nvme_ns_ocssd_cmd_ut 00:03:36.143 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:36.143 LINK nvme_ns_cmd_ut 00:03:36.143 LINK nvme_transport_ut 00:03:36.402 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:36.402 LINK nvme_qpair_ut 00:03:36.402 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:36.402 LINK nvme_io_msg_ut 00:03:36.660 LINK nvme_pcie_ut 00:03:36.660 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:36.918 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:36.918 LINK nvme_fabric_ut 00:03:37.176 LINK nvme_opal_ut 00:03:37.176 LINK transport_ut 00:03:37.434 LINK nvme_pcie_common_ut 00:03:38.001 LINK nvme_tcp_ut 00:03:38.001 LINK rdma_ut 00:03:38.260 LINK nvme_cuse_ut 00:03:38.828 LINK nvme_rdma_ut 00:03:39.087 00:03:39.087 real 2m3.390s 00:03:39.087 user 9m46.555s 00:03:39.087 sys 1m55.880s 00:03:39.087 13:27:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:39.087 13:27:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:39.087 ************************************ 00:03:39.087 END TEST unittest_build 00:03:39.087 ************************************ 00:03:39.087 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:39.087 13:27:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:39.087 13:27:18 -- nvmf/common.sh@7 -- # uname -s 00:03:39.087 
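The END TEST banner and the real/user/sys block above are emitted around each named test phase. A rough sketch of the wrapper pattern, assuming a helper along the lines of SPDK's run_test in common/autotest_common.sh; the body here is an illustration, not the verbatim helper:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"   # banner text is approximate
        time "$@"                  # bash's time builtin produces the real/user/sys lines
        echo "************ END TEST $name ************"
    }
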
13:27:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.087 13:27:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.087 13:27:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.087 13:27:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.087 13:27:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.087 13:27:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.087 13:27:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.087 13:27:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.087 13:27:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.087 13:27:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.087 13:27:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d7033353-3381-472d-ad33-19b60c92d84e 00:03:39.087 13:27:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=d7033353-3381-472d-ad33-19b60c92d84e 00:03:39.087 13:27:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.087 13:27:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.087 13:27:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.087 13:27:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:39.087 13:27:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.087 13:27:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.087 13:27:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.087 13:27:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:39.087 13:27:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:39.087 13:27:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:39.087 13:27:18 -- paths/export.sh@5 -- # export PATH 00:03:39.087 13:27:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:39.087 13:27:18 -- nvmf/common.sh@46 -- # : 0 00:03:39.087 13:27:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:39.087 13:27:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:39.087 13:27:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:39.087 13:27:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.087 13:27:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.087 13:27:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:39.087 13:27:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:39.087 13:27:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:39.087 13:27:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.087 13:27:18 -- spdk/autotest.sh@32 -- # uname -s 
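The NVME_HOST array captured in the nvmf/common.sh trace above packages the generated host identity into reusable nvme-cli flags. A usage sketch grounded in the variables set in this trace (NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_PORT=4420); the subsystem NQN is a placeholder, not a value from this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                          # as at nvmf/common.sh@17 above
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                       # one way to recover the uuid suffix seen above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # placeholder target NQN below; address and port mirror the trace defaults
    nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:cnode1
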
00:03:39.087 13:27:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.087 13:27:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:39.087 13:27:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.087 13:27:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.087 13:27:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.087 13:27:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:39.656 13:27:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:39.656 13:27:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:39.656 13:27:18 -- spdk/autotest.sh@48 -- # udevadm_pid=93920 00:03:39.656 13:27:18 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.656 13:27:18 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:39.656 13:27:19 -- spdk/autotest.sh@54 -- # echo 93960 00:03:39.656 13:27:19 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.914 13:27:19 -- spdk/autotest.sh@56 -- # echo 94044 00:03:39.914 13:27:19 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:39.914 13:27:19 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:39.914 13:27:19 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.914 13:27:19 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:39.914 13:27:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:39.914 13:27:19 -- common/autotest_common.sh@10 -- # set +x 00:03:39.914 13:27:19 -- spdk/autotest.sh@70 -- # create_test_list 00:03:39.914 13:27:19 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:39.914 13:27:19 -- common/autotest_common.sh@10 -- # set +x 00:03:39.914 13:27:19 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.914 13:27:19 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.914 13:27:19 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:39.914 13:27:19 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.914 13:27:19 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.914 13:27:19 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:39.914 13:27:19 -- common/autotest_common.sh@1440 -- # uname 00:03:39.914 13:27:19 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:39.914 13:27:19 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:39.914 13:27:19 -- common/autotest_common.sh@1460 -- # uname 00:03:39.914 13:27:19 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:39.914 13:27:19 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:39.914 13:27:19 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:39.914 13:27:19 -- spdk/autotest.sh@83 -- # hash lcov 00:03:39.914 13:27:19 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:39.914 13:27:19 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:39.914 --rc lcov_branch_coverage=1 00:03:39.914 --rc lcov_function_coverage=1 00:03:39.914 --rc genhtml_branch_coverage=1 00:03:39.914 --rc genhtml_function_coverage=1 00:03:39.914 --rc genhtml_legend=1 00:03:39.914 --rc geninfo_all_blocks=1 
00:03:39.914 ' 00:03:39.914 13:27:19 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:39.914 --rc lcov_branch_coverage=1 00:03:39.914 --rc lcov_function_coverage=1 00:03:39.914 --rc genhtml_branch_coverage=1 00:03:39.914 --rc genhtml_function_coverage=1 00:03:39.914 --rc genhtml_legend=1 00:03:39.914 --rc geninfo_all_blocks=1 00:03:39.914 ' 00:03:39.914 13:27:19 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:39.914 --rc lcov_branch_coverage=1 00:03:39.914 --rc lcov_function_coverage=1 00:03:39.914 --rc genhtml_branch_coverage=1 00:03:39.914 --rc genhtml_function_coverage=1 00:03:39.914 --rc genhtml_legend=1 00:03:39.914 --rc geninfo_all_blocks=1 00:03:39.914 --no-external' 00:03:39.914 13:27:19 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:39.914 --rc lcov_branch_coverage=1 00:03:39.914 --rc lcov_function_coverage=1 00:03:39.914 --rc genhtml_branch_coverage=1 00:03:39.914 --rc genhtml_function_coverage=1 00:03:39.914 --rc genhtml_legend=1 00:03:39.914 --rc geninfo_all_blocks=1 00:03:39.914 --no-external' 00:03:39.914 13:27:19 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:39.914 lcov: LCOV version 1.15 00:03:39.914 13:27:19 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:41.884 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:41.884 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:42.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:42.144 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no 
functions found
00:03:42.144 geninfo: WARNING: GCOV did not produce any data ("no functions found") for 61 header-test .gcno files under /home/vagrant/spdk_repo/spdk/test/cpp_headers; one warning pair was emitted per file between 00:03:42.144 and 00:03:42.926, condensed here: bit_array, memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util, idxd_spec, reduce, notify, accel_module, conf, xor, tree
00:04:29.609 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_band_upgrade.gcno (no functions found)
00:04:29.609 13:28:05 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:04:29.609 13:28:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:04:29.609 13:28:05 -- common/autotest_common.sh@10 -- # set +x
00:04:29.609 13:28:05 -- spdk/autotest.sh@102 -- # rm -f
00:04:29.609 13:28:05 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:29.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:29.609 0000:00:06.0 (1b36
0010): Already using the nvme driver
00:04:29.609 13:28:06 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:04:29.609 13:28:06 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:29.609 13:28:06 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:29.609 13:28:06 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:29.609 13:28:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:29.609 13:28:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:29.609 13:28:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:29.609 13:28:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:29.609 13:28:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:29.609 13:28:06 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:04:29.609 13:28:06 -- spdk/autotest.sh@121 -- # grep -v p
00:04:29.609 13:28:06 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:04:29.609 13:28:06 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:29.609 13:28:06 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:04:29.609 13:28:06 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:04:29.609 13:28:06 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:29.609 13:28:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:29.609 No valid GPT data, bailing
00:04:29.609 13:28:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:29.609 13:28:06 -- scripts/common.sh@393 -- # pt=
00:04:29.609 13:28:06 -- scripts/common.sh@394 -- # return 1
00:04:29.609 13:28:06 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:29.609 1+0 records in
00:04:29.609 1+0 records out
00:04:29.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275131 s, 38.1 MB/s
00:04:29.609 13:28:06 -- spdk/autotest.sh@129 -- # sync
00:04:29.609 13:28:06 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:29.609 13:28:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:29.609 13:28:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes
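The pre-cleanup trace above boils down to two helpers: is_block_zoned, which reads the sysfs "zoned" attribute of each NVMe block device, and block_in_use, which probes for a GPT before the namespace is wiped. A minimal sketch of that logic (not the exact SPDK implementation; device names are taken from this run):

    # Collect zoned block devices: a device counts as zoned when
    # /sys/block/<dev>/queue/zoned contains something other than "none".
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
    # "No valid GPT data, bailing" above means the GPT probe found no
    # partition table worth preserving, so the first MiB is zeroed and
    # flushed (destructive; this mirrors what the log shows):
    dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 && sync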
00:04:29.609 13:28:07 -- spdk/autotest.sh@135 -- # uname -s
00:04:29.609 13:28:08 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:04:29.609 13:28:08 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:29.609 13:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:29.609 13:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:29.609 13:28:08 -- common/autotest_common.sh@10 -- # set +x
00:04:29.609 ************************************
00:04:29.609 START TEST setup.sh
00:04:29.609 ************************************
00:04:29.609 13:28:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:29.609 * Looking for test storage...
00:04:29.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:29.609 13:28:08 -- setup/test-setup.sh@10 -- # uname -s
00:04:29.609 13:28:08 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:29.609 13:28:08 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:29.609 13:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:29.609 13:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:29.609 13:28:08 -- common/autotest_common.sh@10 -- # set +x
00:04:29.609 ************************************
00:04:29.609 START TEST acl
00:04:29.609 ************************************
00:04:29.609 13:28:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:29.609 * Looking for test storage...
00:04:29.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:29.609 13:28:08 -- setup/acl.sh@10 -- # get_zoned_devs
00:04:29.609 13:28:08 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:29.609 13:28:08 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:29.609 13:28:08 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:29.609 13:28:08 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:29.609 13:28:08 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:29.609 13:28:08 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:29.609 13:28:08 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:29.609 13:28:08 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:29.609 13:28:08 -- setup/acl.sh@12 -- # devs=()
00:04:29.609 13:28:08 -- setup/acl.sh@12 -- # declare -a devs
00:04:29.609 13:28:08 -- setup/acl.sh@13 -- # drivers=()
00:04:29.609 13:28:08 -- setup/acl.sh@13 -- # declare -A drivers
00:04:29.609 13:28:08 -- setup/acl.sh@51 -- # setup reset
00:04:29.609 13:28:08 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:29.609 13:28:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:29.609 13:28:08 -- setup/acl.sh@52 -- # collect_setup_devs
00:04:29.609 13:28:08 -- setup/acl.sh@16 -- # local dev driver
00:04:29.609 13:28:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:29.609 13:28:08 -- setup/acl.sh@15 -- # setup output status
00:04:29.609 13:28:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.609 13:28:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:29.868 Hugepages
00:04:29.868 node hugesize free / total
00:04:29.868 13:28:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:29.868 13:28:08 -- setup/acl.sh@19 -- # continue
00:04:29.868 13:28:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:29.868 00
00:04:29.868 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:29.868 13:28:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:29.868 13:28:09 -- setup/acl.sh@19 -- # continue
00:04:29.868 13:28:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:29.868 13:28:09 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:29.868 13:28:09 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:29.868 13:28:09 -- setup/acl.sh@20 -- # continue
00:04:29.868 13:28:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:30.126 13:28:09 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:04:30.126 13:28:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:30.126 13:28:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:30.126 13:28:09 -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:30.126 13:28:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:30.126 13:28:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:30.126 13:28:09 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:30.126 13:28:09 -- setup/acl.sh@54 -- # run_test denied denied
00:04:30.126 13:28:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:30.126 13:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:30.126 13:28:09 -- common/autotest_common.sh@10 -- # set +x
00:04:30.126 ************************************
00:04:30.126 START TEST denied
00:04:30.126 ************************************
00:04:30.126 13:28:09 -- common/autotest_common.sh@1104 -- # denied
00:04:30.126 13:28:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:30.126 13:28:09 -- setup/acl.sh@38 -- # setup output config
00:04:30.126 13:28:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:30.126 13:28:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.126 13:28:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:31.502 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:31.502 13:28:10 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:31.502 13:28:10 -- setup/acl.sh@28 -- # local dev driver
00:04:31.502 13:28:10 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:31.502 13:28:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:31.502 13:28:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:31.503 13:28:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:31.503 13:28:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:31.503 13:28:10 -- setup/acl.sh@41 -- # setup reset
00:04:31.503 13:28:10 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:31.503 13:28:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:32.073
00:04:32.073 real 0m1.974s
00:04:32.073 user 0m0.527s
00:04:32.073 sys 0m1.527s
00:04:32.073 13:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:32.073 13:28:11 -- common/autotest_common.sh@10 -- # set +x
00:04:32.073 ************************************
00:04:32.073 END TEST denied
00:04:32.073 ************************************
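Both ACL cases drive scripts/setup.sh through its PCI allow/deny environment variables: the denied run above puts the NVMe controller on the block list, and the allowed run that follows permits only that controller. Condensed, the two invocations look like this (addresses and expected output as in this job; the filtering itself happens inside setup.sh):

    # deny list: setup.sh must skip the blocked controller
    PCI_BLOCKED=" 0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
    #   -> "0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0"
    # allow list: only the listed controller may be rebound to a userspace driver
    PCI_ALLOWED="0000:00:06.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
    #   -> "0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic"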
00:04:32.073 13:28:11 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:32.073 13:28:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:32.073 13:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:32.073 13:28:11 -- common/autotest_common.sh@10 -- # set +x
00:04:32.073 ************************************
00:04:32.073 START TEST allowed
00:04:32.073 ************************************
00:04:32.073 13:28:11 -- common/autotest_common.sh@1104 -- # allowed
00:04:32.073 13:28:11 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:32.073 13:28:11 -- setup/acl.sh@45 -- # setup output config
00:04:32.073 13:28:11 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:32.073 13:28:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.073 13:28:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:33.983 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:33.983 13:28:12 -- setup/acl.sh@47 -- # verify
00:04:33.983 13:28:12 -- setup/acl.sh@28 -- # local dev driver
00:04:33.983 13:28:12 -- setup/acl.sh@48 -- # setup reset
00:04:33.983 13:28:12 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:33.984 13:28:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:34.242
00:04:34.242 real 0m2.137s
00:04:34.242 user 0m0.531s
00:04:34.243 sys 0m1.608s
00:04:34.243 13:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:34.243 13:28:13 -- common/autotest_common.sh@10 -- # set +x
00:04:34.243 ************************************
00:04:34.243 END TEST allowed
00:04:34.243 ************************************
00:04:34.243
00:04:34.243 real 0m5.360s
00:04:34.243 user 0m1.644s
00:04:34.243 sys 0m3.875s
00:04:34.243 13:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:34.243 13:28:13 -- common/autotest_common.sh@10 -- # set +x
00:04:34.243 ************************************
00:04:34.243 END TEST acl
00:04:34.243 ************************************
00:04:34.243 13:28:13 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:34.243 13:28:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.243 13:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.243 13:28:13 -- common/autotest_common.sh@10 -- # set +x
00:04:34.243 ************************************
00:04:34.243 START TEST hugepages
00:04:34.243 ************************************
00:04:34.243 13:28:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:34.503 * Looking for test storage...
00:04:34.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:34.503 13:28:13 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:34.503 13:28:13 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:34.503 13:28:13 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:34.503 13:28:13 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:34.503 13:28:13 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:34.503 13:28:13 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:34.503 13:28:13 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:34.503 13:28:13 -- setup/common.sh@18 -- # local node=
00:04:34.503 13:28:13 -- setup/common.sh@19 -- # local var val
00:04:34.503 13:28:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.503 13:28:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.503 13:28:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.503 13:28:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.503 13:28:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.503 13:28:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.503 13:28:13 -- setup/common.sh@31 -- # IFS=': '
00:04:34.503 13:28:13 -- setup/common.sh@31 -- # read -r var val _
00:04:34.503 13:28:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 3090720 kB' 'MemAvailable: 7409840 kB' 'Buffers: 37592 kB' 'Cached: 4407264 kB' 'SwapCached: 0 kB' 'Active: 1198200 kB' 'Inactive: 3371628 kB' 'Active(anon): 133984 kB' 'Inactive(anon): 1812 kB' 'Active(file): 1064216 kB' 'Inactive(file): 3369816 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 143428 kB' 'Mapped: 73556 kB' 'Shmem: 2624 kB' 'KReclaimable: 207160 kB' 'Slab: 299520 kB' 'SReclaimable: 207160 kB' 'SUnreclaim: 92360 kB' 'KernelStack: 4660 kB' 'PageTables: 4016 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028392 kB' 'Committed_AS: 624812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:34.503 13:28:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:34.503 13:28:13 -- setup/common.sh@32 -- # continue
[trace condensed: the same "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" / "IFS=': '" / "read -r var val _" sequence repeats for every remaining /proc/meminfo field from MemFree through HugePages_Surp]
00:04:34.504 13:28:13 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:34.504 13:28:13 -- setup/common.sh@33 -- # echo 2048
00:04:34.504 13:28:13 -- setup/common.sh@33 -- # return 0
00:04:34.504 13:28:13 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:34.504 13:28:13 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
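What the repetitive trace above amounts to: get_meminfo scans /proc/meminfo (or a per-node meminfo file when a node is given, with the "Node N" prefix stripped) line by line until the requested field matches, then echoes its value. A minimal standalone sketch of the same lookup, not the SPDK helper itself:

    # Look up a single /proc/meminfo field the way the traced loop does:
    # split each line on ": ", compare the key, print the value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. "2048" for Hugepagesize on this VM
                return 0
            fi
        done < /proc/meminfo
        return 1
    }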
00:04:34.504 13:28:13 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:34.504 13:28:13 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:34.504 13:28:13 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:34.504 13:28:13 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:34.504 13:28:13 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:34.504 13:28:13 -- setup/hugepages.sh@207 -- # get_nodes
00:04:34.504 13:28:13 -- setup/hugepages.sh@27 -- # local node
00:04:34.504 13:28:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.504 13:28:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:34.505 13:28:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.505 13:28:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.505 13:28:13 -- setup/hugepages.sh@208 -- # clear_hp
00:04:34.505 13:28:13 -- setup/hugepages.sh@37 -- # local node hp
00:04:34.505 13:28:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:34.505 13:28:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:34.505 13:28:13 -- setup/hugepages.sh@41 -- # echo 0
00:04:34.505 13:28:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:34.505 13:28:13 -- setup/hugepages.sh@41 -- # echo 0
00:04:34.505 13:28:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:34.505 13:28:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:34.505 13:28:13 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:34.505 13:28:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.505 13:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.505 13:28:13 -- common/autotest_common.sh@10 -- # set +x
00:04:34.505 ************************************
00:04:34.505 START TEST default_setup
00:04:34.505 ************************************
00:04:34.505 13:28:13 -- common/autotest_common.sh@1104 -- # default_setup
00:04:34.505 13:28:13 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:34.505 13:28:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:34.505 13:28:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:34.505 13:28:13 -- setup/hugepages.sh@51 -- # shift
00:04:34.505 13:28:13 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:34.505 13:28:13 -- setup/hugepages.sh@52 -- # local node_ids
00:04:34.505 13:28:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.505 13:28:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:34.505 13:28:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:34.505 13:28:13 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:34.505 13:28:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.505 13:28:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:34.505 13:28:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.505 13:28:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.505 13:28:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.505 13:28:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:34.505 13:28:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:34.505 13:28:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:34.505 13:28:13 -- setup/hugepages.sh@73 -- # return 0
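For default_setup, get_test_nr_hugepages is called with a 2 GiB request for node 0 and converts it into a page count using the hugepage size discovered above. This is the arithmetic the trace implies (the division itself is not echoed, only its inputs and the result):

    size=2097152             # requested test size in kB (2 GiB)
    default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
    echo $(( size / default_hugepages ))   # -> 1024, the nr_hugepages assigned to node 0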
00:04:34.505 13:28:13 -- setup/hugepages.sh@137 -- # setup output
00:04:34.505 13:28:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.505 13:28:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:35.075 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:35.652 13:28:14 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:35.652 13:28:14 -- setup/hugepages.sh@89 -- # local node
00:04:35.652 13:28:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.652 13:28:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.652 13:28:14 -- setup/hugepages.sh@92 -- # local surp
00:04:35.652 13:28:14 -- setup/hugepages.sh@93 -- # local resv
00:04:35.652 13:28:14 -- setup/hugepages.sh@94 -- # local anon
00:04:35.652 13:28:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.652 13:28:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:35.652 13:28:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:35.652 13:28:14 -- setup/common.sh@18 -- # local node=
00:04:35.652 13:28:14 -- setup/common.sh@19 -- # local var val
00:04:35.652 13:28:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.652 13:28:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.652 13:28:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.652 13:28:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.652 13:28:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.652 13:28:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.652 13:28:14 -- setup/common.sh@31 -- # IFS=': '
00:04:35.652 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188368 kB' 'MemAvailable: 9507464 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200488 kB' 'Inactive: 3371440 kB' 'Active(anon): 136056 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145392 kB' 'Mapped: 73672 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'KernelStack: 4480 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 619776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:35.652 13:28:14 -- setup/common.sh@31 -- # read -r var val _
00:04:35.653 13:28:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.653 13:28:14 -- setup/common.sh@32 -- # continue
[trace condensed: the same "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" / "IFS=': '" / "read -r var val _" sequence repeats for every field from MemFree through HardwareCorrupted]
00:04:35.653 13:28:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.653 13:28:14 -- setup/common.sh@33 -- # echo 0
00:04:35.653 13:28:14 -- setup/common.sh@33 -- # return 0
00:04:35.653 13:28:14 -- setup/hugepages.sh@97 -- # anon=0
00:04:35.653 13:28:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.653 13:28:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.653 13:28:14 -- setup/common.sh@18 -- # local node=
00:04:35.653 13:28:14 -- setup/common.sh@19 -- # local var val
00:04:35.653 13:28:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.653 13:28:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.653 13:28:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.653 13:28:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.653 13:28:14 -- setup/common.sh@28 -- # mapfile -t mem
mem=("${mem[@]#Node +([0-9]) }") 00:04:35.653 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.653 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188368 kB' 'MemAvailable: 9507464 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200748 kB' 'Inactive: 3371440 kB' 'Active(anon): 136316 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145652 kB' 'Mapped: 73672 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'KernelStack: 4480 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 619776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 
13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.654 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.654 13:28:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 
00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.655 13:28:14 -- setup/common.sh@33 -- # echo 0 00:04:35.655 13:28:14 -- setup/common.sh@33 -- # return 0 00:04:35.655 13:28:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:35.655 13:28:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.655 13:28:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.655 13:28:14 -- setup/common.sh@18 -- # local node= 00:04:35.655 13:28:14 -- setup/common.sh@19 -- # local var val 00:04:35.655 13:28:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.655 13:28:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.655 13:28:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.655 13:28:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.655 13:28:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.655 13:28:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188368 kB' 'MemAvailable: 9507464 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1201008 kB' 'Inactive: 3371440 kB' 'Active(anon): 136576 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145912 kB' 'Mapped: 73672 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'KernelStack: 4480 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 625380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.655 13:28:14 -- setup/common.sh@32 -- # continue 00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.655 
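
The query that just returned is setup/common.sh's get_meminfo walking /proc/meminfo key by key, exactly as the trace shows. A minimal sketch of that lookup pattern, reconstructed from the xtrace above (the standalone helper name and framing here are mine, not SPDK's exact function):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Read one "Key: value" pair per line; print the value once the
    # requested key matches, mirroring the IFS=': ' read loop in the trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the NUMA node's own meminfo instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local mem var val _
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp     # -> 0 on this run
    get_meminfo_sketch HugePages_Total 0  # -> 1024 for node0
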
00:04:35.655 13:28:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.655 13:28:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.655 13:28:14 -- setup/common.sh@18 -- # local node=
00:04:35.655 13:28:14 -- setup/common.sh@19 -- # local var val
00:04:35.655 13:28:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.655 13:28:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.655 13:28:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.655 13:28:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.655 13:28:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.655 13:28:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.655 13:28:14 -- setup/common.sh@31 -- # IFS=': '
00:04:35.655 13:28:14 -- setup/common.sh@31 -- # read -r var val _
00:04:35.655 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188368 kB' 'MemAvailable: 9507464 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1201008 kB' 'Inactive: 3371440 kB' 'Active(anon): 136576 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145912 kB' 'Mapped: 73672 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'KernelStack: 4480 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 625380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:35.655 [... xtrace loop condensed: each key tested against HugePages_Rsvd until the match ...]
00:04:35.656 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.656 13:28:14 -- setup/common.sh@33 -- # echo 0
00:04:35.656 13:28:14 -- setup/common.sh@33 -- # return 0
00:04:35.656 13:28:14 -- setup/hugepages.sh@100 -- # resv=0
00:04:35.656 13:28:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:35.656 nr_hugepages=1024
00:04:35.656 13:28:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.656 resv_hugepages=0
00:04:35.656 13:28:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.656 surplus_hugepages=0
00:04:35.656 13:28:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.656 anon_hugepages=0
00:04:35.656 13:28:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.656 13:28:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:35.656 13:28:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.656 [... get_meminfo preamble as above: node unset, mem_f=/proc/meminfo ...]
00:04:35.656 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188856 kB' 'MemAvailable: 9507952 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200804 kB' 'Inactive: 3371440 kB' 'Active(anon): 136372 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145828 kB' 'Mapped: 73672 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'KernelStack: 4596 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 623916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:35.657 [... xtrace loop condensed: each key tested against HugePages_Total until the match ...]
00:04:35.657 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.657 13:28:14 -- setup/common.sh@33 -- # echo 1024
00:04:35.657 13:28:14 -- setup/common.sh@33 -- # return 0
00:04:35.657 13:28:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.657 13:28:14 -- setup/hugepages.sh@112 -- # get_nodes
00:04:35.657 13:28:14 -- setup/hugepages.sh@27 -- # local node
00:04:35.657 13:28:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.657 13:28:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:35.657 13:28:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:35.657 13:28:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
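
The two arithmetic checks above are the heart of verify_nr_hugepages: the kernel-wide HugePages_Total must equal the requested page count plus any surplus and reserved pages, and get_nodes then enumerates the NUMA nodes to attribute those pages. A compact restatement, reusing the get_meminfo_sketch helper from the earlier sketch (same caveats; variable names illustrative):

    shopt -s extglob
    nr_hugepages=1024 surp=0 resv=0

    # Kernel-wide accounting: requested + surplus + reserved must equal
    # what /proc/meminfo reports as HugePages_Total.
    (( nr_hugepages + surp + resv == $(get_meminfo_sketch HugePages_Total) )) \
        || { echo "hugepage accounting mismatch" >&2; exit 1; }

    # Enumerate NUMA nodes the way get_nodes does; this VM has exactly one.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # -> no_nodes=1 here
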
00:04:35.657 13:28:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.657 13:28:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.657 13:28:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:35.657 13:28:14 -- setup/common.sh@18 -- # local node=0
00:04:35.657 13:28:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:35.657 13:28:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:35.657 [... rest of get_meminfo preamble as above, now reading node0's meminfo ...]
00:04:35.658 13:28:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5188532 kB' 'MemUsed: 7062560 kB' 'Active: 1200636 kB' 'Inactive: 3371440 kB' 'Active(anon): 136204 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1064432 kB' 'Inactive(file): 3369640 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'FilePages: 4444892 kB' 'Mapped: 73672 kB' 'AnonPages: 145144 kB' 'Shmem: 2616 kB' 'KernelStack: 4648 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207096 kB' 'Slab: 299256 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:35.658 [... xtrace loop condensed: each node0 key tested against HugePages_Surp until the match ...]
00:04:35.658 13:28:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.658 13:28:14 -- setup/common.sh@33 -- # echo 0
00:04:35.658 13:28:14 -- setup/common.sh@33 -- # return 0
00:04:35.658 13:28:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.658 13:28:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.658 13:28:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.658 13:28:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.658 13:28:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:35.658 node0=1024 expecting 1024
00:04:35.658 13:28:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:35.658
00:04:35.658 real 0m1.171s
00:04:35.658 user 0m0.317s
00:04:35.658 sys 0m0.835s
00:04:35.658 13:28:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.658 13:28:14 -- common/autotest_common.sh@10 -- # set +x
00:04:35.658 ************************************
00:04:35.658 END TEST default_setup
00:04:35.658 ************************************
00:04:35.658 13:28:14 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:35.658 13:28:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:35.658 13:28:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:35.658 13:28:14 -- common/autotest_common.sh@10 -- # set +x
00:04:35.658 ************************************
00:04:35.658 START TEST per_node_1G_alloc
00:04:35.658 ************************************
00:04:35.658 13:28:14 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:35.658 13:28:14 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:35.658 13:28:14 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:35.658 13:28:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:35.658 13:28:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:35.658 13:28:14 -- setup/hugepages.sh@51 -- # shift
00:04:35.658 13:28:14 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:35.659 13:28:14 -- setup/hugepages.sh@52 -- # local node_ids
00:04:35.659 13:28:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:35.659 13:28:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:35.659 13:28:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:35.659 13:28:14 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:35.659 13:28:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:35.659 13:28:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:35.659 13:28:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:35.659 13:28:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:35.659 13:28:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:35.659 13:28:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:35.659 13:28:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:35.659 13:28:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:35.659 13:28:14 -- setup/hugepages.sh@73 -- # return 0
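
get_test_nr_hugepages just translated the requested 1 GiB (1048576 kB) into a page count using the system's default hugepage size, which the snapshots above report as Hugepagesize: 2048 kB: 1048576 / 2048 = 512 pages. A one-liner restating that step (names illustrative):

    size_kb=1048576   # 1 GiB requested for the per-node test
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> nr_hugepages=512
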
00:04:35.659 13:28:14 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:35.659 13:28:14 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:35.659 13:28:14 -- setup/hugepages.sh@146 -- # setup output
00:04:35.659 13:28:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.659 13:28:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:36.193 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
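
With NRHUGE=512 and HUGENODE=0 exported, setup.sh confines the reservation to node 0, and the snapshots that follow indeed show HugePages_Total drop from 1024 to 512. The standard kernel interface for such a per-node request is the node-local sysfs knob; a hypothetical direct equivalent of what the script arranges (the exact mechanics inside setup.sh may differ):

    node=0 pages=512 sz_kb=2048   # sz_kb matches Hugepagesize above
    # Per-node hugepage reservation via the kernel's node-local sysfs file
    echo "$pages" | sudo tee "/sys/devices/system/node/node$node/hugepages/hugepages-${sz_kb}kB/nr_hugepages"
    cat "/sys/devices/system/node/node$node/hugepages/hugepages-${sz_kb}kB/free_hugepages"   # -> 512 once reserved
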
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.455 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.455 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 
-- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.456 13:28:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.456 13:28:15 -- setup/common.sh@33 -- # echo 0 00:04:36.456 13:28:15 -- setup/common.sh@33 -- # return 0 00:04:36.456 13:28:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:36.456 13:28:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.456 13:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.456 13:28:15 -- setup/common.sh@18 -- # local node= 00:04:36.456 13:28:15 -- setup/common.sh@19 -- # local var val 00:04:36.456 13:28:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.456 13:28:15 -- 
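What the loop above is doing, restated as a stand-alone sketch: setup/common.sh's get_meminfo walks /proc/meminfo one record at a time, splitting each "Key:   value kB" line on IFS=': ' and echoing the value once the requested key matches. The helper below is re-derived from the xtrace for illustration only; the name get_meminfo_sketch is ours, not SPDK's.

  # get_meminfo_sketch KEY -- print the value of KEY from /proc/meminfo.
  # IFS=': ' splits "Key:   value kB" into var=Key, val=value; keys that
  # do not match fall through to the next read, exactly as in the trace.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done </proc/meminfo
      return 1
  }

  # e.g. anon=$(get_meminfo_sketch AnonHugePages)   # -> 0 in the run above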
00:04:36.456 13:28:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.456 13:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.456 13:28:15 -- setup/common.sh@18 -- # local node=
00:04:36.456 13:28:15 -- setup/common.sh@19 -- # local var val
00:04:36.456 13:28:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.456 13:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.456 13:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.456 13:28:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.456 13:28:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.456 13:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.456 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.456 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.457 13:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6237108 kB' 'MemAvailable: 10556208 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200760 kB' 'Inactive: 3371408 kB' 'Active(anon): 136288 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064472 kB' 'Inactive(file): 3369604 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145780 kB' 'Mapped: 73948 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299376 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92280 kB' 'KernelStack: 4644 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 623516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:36.457 13:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.457 13:28:15 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace repeats once per remaining key, MemFree through HugePages_Rsvd: IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue]
00:04:36.458 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.458 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.458 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.458 13:28:15 -- setup/common.sh@33 -- # echo 0
00:04:36.458 13:28:15 -- setup/common.sh@33 -- # return 0
00:04:36.458 13:28:15 -- setup/hugepages.sh@99 -- # surp=0
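For context on the counters being fetched: HugePages_Surp counts pages the kernel allocated beyond vm.nr_hugepages under overcommit, while HugePages_Rsvd (queried next) counts pages promised to mappings but not yet faulted in; the verification expects both at 0 so the pool is exactly the 512 pages just configured. A small sketch reusing the hypothetical helper introduced above:

  # Snapshot the four hugepage counters the verifier cares about.
  for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
      printf '%s=%s\n' "$key" "$(get_meminfo_sketch "$key")"
  done
  # In the run above: Total=512 Free=512 Rsvd=0 Surp=0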
00:04:36.458 13:28:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.458 13:28:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.458 13:28:15 -- setup/common.sh@18 -- # local node=
00:04:36.458 13:28:15 -- setup/common.sh@19 -- # local var val
00:04:36.458 13:28:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.458 13:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.458 13:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.458 13:28:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.458 13:28:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.458 13:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.458 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.458 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.458 13:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6237384 kB' 'MemAvailable: 10556484 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200684 kB' 'Inactive: 3371408 kB' 'Active(anon): 136212 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064472 kB' 'Inactive(file): 3369604 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145516 kB' 'Mapped: 73900 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299376 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92280 kB' 'KernelStack: 4596 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 629032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:36.458 13:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.458 13:28:15 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace repeats once per remaining key, MemFree through HugePages_Free: IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue]
00:04:36.459 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.459 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.459 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.459 13:28:15 -- setup/common.sh@33 -- # echo 0
00:04:36.459 13:28:15 -- setup/common.sh@33 -- # return 0
00:04:36.459 13:28:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:36.459 13:28:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:36.459 nr_hugepages=512
00:04:36.459 13:28:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.459 resv_hugepages=0
00:04:36.459 surplus_hugepages=0
00:04:36.459 13:28:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.459 anon_hugepages=0
00:04:36.459 13:28:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.459 13:28:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:36.459 13:28:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
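The two arithmetic guards at hugepages.sh@107/@109 are the invariant this whole pass establishes; the literal 512 in the trace is the already-expanded nr_hugepages value. Restated as a sketch with the trace's variable names:

  # The pool must match the request exactly: nothing borrowed (surplus),
  # nothing merely promised (reserved). Here: 512 == 512 + 0 + 0.
  (( 512 == nr_hugepages + surp + resv )) || exit 1
  (( 512 == nr_hugepages )) || exit 1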
00:04:36.459 13:28:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.459 13:28:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.459 13:28:15 -- setup/common.sh@18 -- # local node=
00:04:36.459 13:28:15 -- setup/common.sh@19 -- # local var val
00:04:36.459 13:28:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.459 13:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.459 13:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.459 13:28:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.459 13:28:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.459 13:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.459 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.459 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.459 13:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6237864 kB' 'MemAvailable: 10556964 kB' 'Buffers: 37592 kB' 'Cached: 4407300 kB' 'SwapCached: 0 kB' 'Active: 1200684 kB' 'Inactive: 3371408 kB' 'Active(anon): 136212 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064472 kB' 'Inactive(file): 3369604 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'AnonPages: 145388 kB' 'Mapped: 73900 kB' 'Shmem: 2616 kB' 'KReclaimable: 207096 kB' 'Slab: 299376 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92280 kB' 'KernelStack: 4596 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 633860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14264 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:36.459 13:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.459 13:28:15 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace repeats once per remaining key, MemFree through CmaFree: IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue]
00:04:36.460 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.460 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.460 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.460 13:28:15 -- setup/common.sh@33 -- # echo 512
00:04:36.460 13:28:15 -- setup/common.sh@33 -- # return 0
00:04:36.460 13:28:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
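As a cross-check of the numbers the meminfo dumps above report: HugePages_Total 512 at Hugepagesize 2048 kB is exactly the Hugetlb 1048576 kB the kernel accounts, i.e. a 1 GiB hugepage pool for the test VM. Pure shell arithmetic, no SPDK code involved:

  # 512 pages x 2 MiB = 1 GiB, matching 'Hugetlb: 1048576 kB' above.
  echo $(( 512 * 2048 ))        # 1048576 (kB)
  echo $(( 512 * 2048 / 1024 )) # 1024 (MiB)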
00:04:36.460 13:28:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:36.460 13:28:15 -- setup/hugepages.sh@27 -- # local node
00:04:36.460 13:28:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.460 13:28:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:36.460 13:28:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:36.460 13:28:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:36.460 13:28:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.460 13:28:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:36.460 13:28:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.460 13:28:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.460 13:28:15 -- setup/common.sh@18 -- # local node=0
00:04:36.460 13:28:15 -- setup/common.sh@19 -- # local var val
00:04:36.460 13:28:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.460 13:28:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.460 13:28:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.460 13:28:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.460 13:28:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.460 13:28:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.460 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.460 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.460 13:28:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6238360 kB' 'MemUsed: 6012732 kB' 'Active: 1200248 kB' 'Inactive: 3371408 kB' 'Active(anon): 135776 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064472 kB' 'Inactive(file): 3369604 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 408 kB' 'Writeback: 0 kB' 'FilePages: 4444892 kB' 'Mapped: 73672 kB' 'AnonPages: 145580 kB' 'Shmem: 2616 kB' 'KernelStack: 4640 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207096 kB' 'Slab: 299384 kB' 'SReclaimable: 207096 kB' 'SUnreclaim: 92288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:36.460 13:28:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.460 13:28:15 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace repeats once per remaining node0 key, MemFree through FileHugePages: IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue]
00:04:36.461 13:28:15 -- setup/common.sh@31 -- # IFS=': '
00:04:36.461 13:28:15 -- setup/common.sh@31 -- # read -r var val _
00:04:36.461 13:28:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
13:28:15 --
setup/common.sh@32 -- # continue 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.461 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.461 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.461 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.461 13:28:15 -- setup/common.sh@32 -- # continue 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.461 13:28:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.461 13:28:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.461 13:28:15 -- setup/common.sh@33 -- # echo 0 00:04:36.461 13:28:15 -- setup/common.sh@33 -- # return 0 00:04:36.461 13:28:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.461 13:28:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.461 13:28:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.461 13:28:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.461 node0=512 expecting 512 00:04:36.461 13:28:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.461 00:04:36.461 real 0m0.718s 00:04:36.461 user 0m0.241s 00:04:36.461 sys 0m0.510s 00:04:36.461 13:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.461 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.461 ************************************ 00:04:36.461 END TEST per_node_1G_alloc 00:04:36.461 ************************************ 00:04:36.461 13:28:15 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:36.461 13:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.461 13:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.461 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.461 ************************************ 00:04:36.461 START TEST even_2G_alloc 00:04:36.461 ************************************ 00:04:36.461 13:28:15 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:36.461 13:28:15 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:36.461 13:28:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.461 13:28:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.461 13:28:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.461 13:28:15 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:36.461 13:28:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.461 13:28:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.461 13:28:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:36.461 13:28:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.461 13:28:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.461 13:28:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.461 13:28:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:36.461 13:28:15 -- 
00:04:36.461 13:28:15 -- setup/hugepages.sh@83 -- # : 0
00:04:36.461 13:28:15 -- setup/hugepages.sh@84 -- # : 0
00:04:36.461 13:28:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:36.461 13:28:15 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:36.461 13:28:15 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:36.462 13:28:15 -- setup/hugepages.sh@153 -- # setup output
00:04:36.462 13:28:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.462 13:28:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:37.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:37.031 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
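The get_test_nr_hugepages trace above turns the requested 2097152 kB into 1024 default-size pages before scripts/setup.sh is run. A minimal sketch consistent with the traced values, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots below; the function name is ours, and the division is our reading of the guard and result seen in the trace:

  # sizing sketch: kB requested -> number of default-size hugepages
  get_test_nr_hugepages_sketch() {
      local size=$1 default_hugepages=2048          # Hugepagesize in kB, from /proc/meminfo
      (( size >= default_hugepages )) || return 1   # guard traced at setup/hugepages.sh@55
      echo $(( size / default_hugepages ))          # 2097152 / 2048 = 1024
  }
  NRHUGE=$(get_test_nr_hugepages_sketch 2097152)    # -> 1024, matching nr_hugepages=1024 above

HUGE_EVEN_ALLOC=yes apparently asks setup.sh to spread those pages evenly across NUMA nodes, which on this single-node VM means all 1024 pages land on node0.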
00:04:37.291 13:28:16 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:37.291 13:28:16 -- setup/hugepages.sh@89 -- # local node
00:04:37.291 13:28:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:37.291 13:28:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:37.291 13:28:16 -- setup/hugepages.sh@92 -- # local surp
00:04:37.291 13:28:16 -- setup/hugepages.sh@93 -- # local resv
00:04:37.291 13:28:16 -- setup/hugepages.sh@94 -- # local anon
00:04:37.291 13:28:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:37.291 13:28:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:37.291 13:28:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:37.292 13:28:16 -- setup/common.sh@18 -- # local node=
00:04:37.292 13:28:16 -- setup/common.sh@19 -- # local var val
00:04:37.292 13:28:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:37.292 13:28:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.292 13:28:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.292 13:28:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.292 13:28:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.292 13:28:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.292 13:28:16 -- setup/common.sh@31 -- # IFS=': '
00:04:37.292 13:28:16 -- setup/common.sh@31 -- # read -r var val _
00:04:37.292 13:28:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5186888 kB' 'MemAvailable: 9505980 kB' 'Buffers: 37592 kB' 'Cached: 4407328 kB' 'SwapCached: 0 kB' 'Active: 1202808 kB' 'Inactive: 3371428 kB' 'Active(anon): 138328 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 147492 kB' 'Mapped: 73636 kB' 'Shmem: 2616 kB' 'KReclaimable: 207060 kB' 'Slab: 298940 kB' 'SReclaimable: 207060 kB' 'SUnreclaim: 91880 kB' 'KernelStack: 4624 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 630460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:37.292 13:28:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:37.292 13:28:16 -- setup/common.sh@32 -- # continue
[... per-field xtrace elided: each following /proc/meminfo field is checked and skipped with "continue" until AnonHugePages matches ...]
00:04:37.293 13:28:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:37.293 13:28:16 -- setup/common.sh@33 -- # echo 0
00:04:37.293 13:28:16 -- setup/common.sh@33 -- # return 0
00:04:37.293 13:28:16 -- setup/hugepages.sh@97 -- # anon=0
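Every get_meminfo call traced in this stage expands to the same scan: set IFS to colon plus space, read key/value pairs, and continue past each field until the requested key matches, then echo its value. A minimal standalone sketch of that scan; get_meminfo_sketch is our name, not the setup/common.sh original, which additionally handles the per-node /sys/devices/system/node meminfo files checked at setup/common.sh@23:

  # parse one field out of /proc/meminfo, as the xtrace above does
  get_meminfo_sketch() {                     # e.g. get_meminfo_sketch AnonHugePages
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching fields
          echo "$val"                        # numeric value; the "kB" unit lands in "_"
          return 0
      done < /proc/meminfo
      return 1
  }

On this host, get_meminfo_sketch HugePages_Surp prints 0, matching the surp=0 assignment traced just below.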
00:04:37.293 13:28:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:37.293 13:28:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.293 13:28:16 -- setup/common.sh@18 -- # local node=
00:04:37.293 13:28:16 -- setup/common.sh@19 -- # local var val
00:04:37.293 13:28:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:37.293 13:28:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.293 13:28:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.293 13:28:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.293 13:28:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.293 13:28:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.293 13:28:16 -- setup/common.sh@31 -- # IFS=': '
00:04:37.293 13:28:16 -- setup/common.sh@31 -- # read -r var val _
00:04:37.293 13:28:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5186936 kB' 'MemAvailable: 9506028 kB' 'Buffers: 37592 kB' 'Cached: 4407328 kB' 'SwapCached: 0 kB' 'Active: 1202568 kB' 'Inactive: 3371428 kB' 'Active(anon): 138088 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 147828 kB' 'Mapped: 73668 kB' 'Shmem: 2616 kB' 'KReclaimable: 207060 kB' 'Slab: 298924 kB' 'SReclaimable: 207060 kB' 'SUnreclaim: 91864 kB' 'KernelStack: 4624 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 636412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14336 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:37.293 13:28:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.293 13:28:16 -- setup/common.sh@32 -- # continue
[... per-field xtrace elided: each following /proc/meminfo field is checked and skipped with "continue" until HugePages_Surp matches ...]
00:04:37.556 13:28:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.556 13:28:16 -- setup/common.sh@33 -- # echo 0
00:04:37.556 13:28:16 -- setup/common.sh@33 -- # return 0
00:04:37.556 13:28:16 -- setup/hugepages.sh@99 -- # surp=0
00:04:37.556 13:28:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:37.556 13:28:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:37.556 13:28:16 -- setup/common.sh@18 -- # local node=
00:04:37.556 13:28:16 -- setup/common.sh@19 -- # local var val
00:04:37.556 13:28:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:37.556 13:28:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.556 13:28:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.556 13:28:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.556 13:28:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.556 13:28:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.556 13:28:16 -- setup/common.sh@31 -- # IFS=': '
00:04:37.556 13:28:16 -- setup/common.sh@31 -- # read -r var val _
00:04:37.556 13:28:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5186944 kB' 'MemAvailable: 9506036 kB' 'Buffers: 37592 kB' 'Cached: 4407328 kB' 'SwapCached: 0 kB' 'Active: 1202592 kB' 'Inactive: 3371428 kB' 'Active(anon): 138112 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 147560 kB' 'Mapped: 73668 kB' 'Shmem: 2616 kB' 'KReclaimable: 207060 kB' 'Slab: 298924 kB' 'SReclaimable: 207060 kB' 'SUnreclaim: 91864 kB' 'KernelStack: 4592 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 636412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14352 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:37.556 13:28:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.556 13:28:16 -- setup/common.sh@32 -- # continue
[... per-field xtrace elided: each following /proc/meminfo field is checked and skipped with "continue" until HugePages_Rsvd matches ...]
00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.557 13:28:16 -- setup/common.sh@33 -- # echo 0
00:04:37.557 13:28:16 -- setup/common.sh@33 -- # return 0
00:04:37.557 13:28:16 -- setup/hugepages.sh@100 -- # resv=0
00:04:37.557 13:28:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:37.557 nr_hugepages=1024
00:04:37.557 resv_hugepages=0
00:04:37.557 surplus_hugepages=0
00:04:37.557 13:28:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:37.557 13:28:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:37.557 anon_hugepages=0
00:04:37.557 13:28:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:37.557 13:28:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.557 13:28:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
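The counters gathered so far feed one pass/fail identity, traced at setup/hugepages.sh@107 and repeated at @110 once HugePages_Total is fetched. Restated with this run's values; nr_hugepages, surp, and resv are the names in the trace, total is our label for the HugePages_Total readout:

  # accounting sketch with the values echoed above
  nr_hugepages=1024   # requested by get_test_nr_hugepages 2097152
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  total=1024          # HugePages_Total, fetched next
  (( total == nr_hugepages + surp + resv ))   # holds: 1024 == 1024 + 0 + 0

So the global check can only pass when no surplus pages were allocated and no reservations are outstanding, which is exactly what the zeroed readouts above show.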
00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5187228 kB' 'MemAvailable: 9506324 kB' 'Buffers: 37592 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202644 kB' 'Inactive: 3371432 kB' 'Active(anon): 138164 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 147692 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207060 kB' 'Slab: 298940 kB' 'SReclaimable: 207060 kB' 'SUnreclaim: 91880 kB' 'KernelStack: 4580 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 635924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14368 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.557 13:28:16 -- setup/common.sh@32 -- # continue 00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.557 13:28:16 -- setup/common.sh@31 -- # IFS=': '
00:04:37.557 13:28:16 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: get_meminfo "continue"s past every /proc/meminfo field from Active(anon) through CmaFree via setup/common.sh@32; only the target key is acted on ...]
00:04:37.557 13:28:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:37.558 13:28:16 -- setup/common.sh@33 -- # echo 1024
00:04:37.558 13:28:16 -- setup/common.sh@33 -- # return 0
00:04:37.558 13:28:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@112 -- # get_nodes
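The block above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" pair at a time until the requested key (here HugePages_Total) matches, then echoing its value (1024). A minimal sketch of that pattern, reconstructed from the traced statements (mapfile at @28, IFS=': ' and read -r at @31, the match/continue at @32-@33) rather than copied from the SPDK source; note most meminfo values are in kB, while the HugePages_* keys are bare page counts:

  get_meminfo() {
      local get=$1 var val _
      mapfile -t mem < /proc/meminfo            # one array entry per meminfo line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue      # the "continue" runs seen in the trace
          echo "$val"                           # e.g. 1024 for HugePages_Total
          return 0
      done
      return 1
  }

Called as get_meminfo HugePages_Total, this prints 1024 on this box, which is what the @33 echo above shows.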
00:04:37.558 13:28:16 -- setup/hugepages.sh@27 -- # local node
00:04:37.558 13:28:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.558 13:28:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:37.558 13:28:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:37.558 13:28:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:37.558 13:28:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:37.558 13:28:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.558 13:28:16 -- setup/common.sh@18 -- # local node=0
00:04:37.558 13:28:16 -- setup/common.sh@19 -- # local var val
00:04:37.558 13:28:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:37.558 13:28:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.558 13:28:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:37.558 13:28:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:37.558 13:28:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.558 13:28:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.558 13:28:16 -- setup/common.sh@31 -- # IFS=': '
00:04:37.558 13:28:16 -- setup/common.sh@31 -- # read -r var val _
00:04:37.558 13:28:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5186920 kB' 'MemUsed: 7064172 kB' 'Active: 1202752 kB' 'Inactive: 3371432 kB' 'Active(anon): 138272 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4444924 kB' 'Mapped: 73684 kB' 'AnonPages: 147792 kB' 'Shmem: 2616 kB' 'KernelStack: 4564 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207060 kB' 'Slab: 298940 kB' 'SReclaimable: 207060 kB' 'SUnreclaim: 91880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: get_meminfo "continue"s past every node0 field from MemTotal through FilePmdMapped, then HugePages_Total and HugePages_Free ...]
00:04:37.558 13:28:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.558 13:28:16 -- setup/common.sh@33 -- # echo 0
00:04:37.558 13:28:16 -- setup/common.sh@33 -- # return 0
00:04:37.558 13:28:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:37.558 13:28:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:37.558 13:28:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:37.558 13:28:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:37.558 node0=1024 expecting 1024
00:04:37.558 13:28:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:37.558 
00:04:37.558 real 0m0.962s
00:04:37.558 user 0m0.277s
00:04:37.558 sys 0m0.719s
00:04:37.558 13:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:37.558 13:28:16 -- common/autotest_common.sh@10 -- # set +x
00:04:37.558 ************************************
00:04:37.558 END TEST even_2G_alloc
00:04:37.558 ************************************
00:04:37.558 13:28:16 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:37.558 13:28:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:37.558 13:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:37.558 13:28:16 -- common/autotest_common.sh@10 -- # set +x
00:04:37.558 ************************************
00:04:37.558 START TEST odd_alloc
00:04:37.558 ************************************
00:04:37.558 13:28:16 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:37.558 13:28:16 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:37.558 13:28:16 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:37.558 13:28:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:37.558 13:28:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:37.558 13:28:16 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:37.558 13:28:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:37.558 13:28:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:37.558 13:28:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:37.558 13:28:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:37.558 13:28:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:37.558 13:28:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
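For odd_alloc, get_test_nr_hugepages receives size=2098176 kB, which is the HUGEMEM=2049 (MB) set a few lines below converted to kB (2049 * 1024 = 2098176); the per-node distribution of the result continues just after this note. With the default 2048 kB hugepage size that request is 1024.5 pages, and the trace settles on nr_hugepages=1025, an odd count, which is the point of the test. The exact rounding statement is not visible in this trace; a ceiling division reproduces the observed value:

  size_kb=$((2049 * 1024))                   # HUGEMEM in MB converted to kB = 2098176
  hp_kb=2048                                 # default hugepage size on this box
  echo $(( (size_kb + hp_kb - 1) / hp_kb ))  # ceiling division -> 1025 pages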
00:04:37.558 13:28:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:37.558 13:28:16 -- setup/hugepages.sh@83 -- # : 0
00:04:37.558 13:28:16 -- setup/hugepages.sh@84 -- # : 0
00:04:37.558 13:28:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:37.558 13:28:16 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:37.558 13:28:16 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:37.558 13:28:16 -- setup/hugepages.sh@160 -- # setup output
00:04:37.558 13:28:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:37.558 13:28:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:37.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:37.816 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:38.388 13:28:17 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:38.388 13:28:17 -- setup/hugepages.sh@89 -- # local node
00:04:38.388 13:28:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:38.388 13:28:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:38.388 13:28:17 -- setup/hugepages.sh@92 -- # local surp
00:04:38.388 13:28:17 -- setup/hugepages.sh@93 -- # local resv
00:04:38.388 13:28:17 -- setup/hugepages.sh@94 -- # local anon
00:04:38.388 13:28:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:38.388 13:28:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:38.388 13:28:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:38.388 13:28:17 -- setup/common.sh@18 -- # local node=
00:04:38.388 13:28:17 -- setup/common.sh@19 -- # local var val
00:04:38.388 13:28:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:38.388 13:28:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.388 13:28:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.388 13:28:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.388 13:28:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.388 13:28:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.388 13:28:17 -- setup/common.sh@31 -- # IFS=': '
00:04:38.388 13:28:17 -- setup/common.sh@31 -- # read -r var val _
00:04:38.388 13:28:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5184664 kB' 'MemAvailable: 9503776 kB' 'Buffers: 37600 kB' 'Cached: 4407324 kB' 'SwapCached: 0 kB' 'Active: 1202864 kB' 'Inactive: 3371432 kB' 'Active(anon): 138384 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064480 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 147664 kB' 'Mapped: 73968 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299248 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92172 kB' 'KernelStack: 4644 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 636776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14352 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
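Two shapes of get_meminfo appear in this log: called with a node argument (get_meminfo HugePages_Surp 0 in the even_2G_alloc block above) it reads /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from every line (the expansion at common.sh@29); called without one (node= here) the -e test at common.sh@23 fails and it stays on /proc/meminfo. A sketch of that selection under the same caveat as before, reconstructed from the trace and not the literal SPDK code:

  shopt -s extglob                       # the +([0-9]) pattern in the trace needs extglob
  node=${2:-}                            # optional NUMA node argument
  mem_f=/proc/meminfo                    # default source when node= is empty
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix on sysfs lines

The prefix strip is what lets the same IFS=': ' parsing loop handle both file formats unchanged.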
[... xtrace condensed: get_meminfo "continue"s past every /proc/meminfo field from MemTotal through HardwareCorrupted ...]
00:04:38.389 13:28:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.389 13:28:17 -- setup/common.sh@33 -- # echo 0
00:04:38.389 13:28:17 -- setup/common.sh@33 -- # return 0
00:04:38.389 13:28:17 -- setup/hugepages.sh@97 -- # anon=0
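The anon value is gated by the test at hugepages.sh@96: the traced pattern *\[\n\e\v\e\r\]* is just xtrace's escaping of *[never]*, i.e. AnonHugePages is only consulted when transparent hugepages are not disabled, and "always [madvise] never" in the trace is the content of the standard THP mode file. Roughly, as a hedged reconstruction (the sysfs path is the usual kernel one, not shown in the trace):

  # the bracketed word in this file is the active THP mode, e.g. "always [madvise] never"
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)  # THP can add anonymous huge pages to the total
  fi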
00:04:38.389 13:28:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.389 13:28:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.389 13:28:17 -- setup/common.sh@18 -- # local node=
00:04:38.389 13:28:17 -- setup/common.sh@19 -- # local var val
00:04:38.389 13:28:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:38.389 13:28:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.389 13:28:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.389 13:28:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.389 13:28:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.389 13:28:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.390 13:28:17 -- setup/common.sh@31 -- # IFS=': '
00:04:38.390 13:28:17 -- setup/common.sh@31 -- # read -r var val _
00:04:38.390 13:28:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5185108 kB' 'MemAvailable: 9504228 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202636 kB' 'Inactive: 3371428 kB' 'Active(anon): 138144 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147672 kB' 'Mapped: 73968 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299072 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91996 kB' 'KernelStack: 4628 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 636776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14352 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[... xtrace condensed: get_meminfo "continue"s past every field from MemTotal through CmaFree, then HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.391 13:28:17 -- setup/common.sh@33 -- # echo 0
00:04:38.391 13:28:17 -- setup/common.sh@33 -- # return 0
00:04:38.391 13:28:17 -- setup/hugepages.sh@99 -- # surp=0
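surp joins anon from above; the HugePages_Rsvd read that follows completes the triple, and the checks at hugepages.sh@107-@110 below then assert the kernel delivered exactly what odd_alloc requested. Condensed, the bookkeeping the trace shows looks roughly like this (a sketch using get_meminfo as sketched earlier, not the literal verify_nr_hugepages, which also does per-node sums):

  nr_hugepages=1025                          # requested by odd_alloc
  surp=$(get_meminfo HugePages_Surp)         # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
  total=$(get_meminfo HugePages_Total)       # 1025 in this run
  (( total == nr_hugepages + surp + resv ))  # the @107/@110 consistency check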
setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.391 13:28:17 -- setup/common.sh@33 -- # echo 0 00:04:38.391 13:28:17 -- setup/common.sh@33 -- # return 0 00:04:38.391 13:28:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:38.391 13:28:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.391 13:28:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.391 13:28:17 -- setup/common.sh@18 -- # local node= 00:04:38.391 13:28:17 -- setup/common.sh@19 -- # local var val 00:04:38.391 13:28:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.391 13:28:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.391 13:28:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.391 13:28:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.391 13:28:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.391 13:28:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5185328 kB' 'MemAvailable: 9504448 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202428 kB' 'Inactive: 3371428 kB' 'Active(anon): 137936 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147044 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299088 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92012 kB' 'KernelStack: 4560 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 641844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14368 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.391 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.391 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # 
continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.392 13:28:17 -- setup/common.sh@33 -- # echo 0 00:04:38.392 13:28:17 -- setup/common.sh@33 -- # return 0 00:04:38.392 13:28:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:38.392 13:28:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:38.392 nr_hugepages=1025 00:04:38.392 resv_hugepages=0 00:04:38.392 13:28:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.392 surplus_hugepages=0 00:04:38.392 13:28:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.392 anon_hugepages=0 00:04:38.392 13:28:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.392 13:28:17 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:38.392 13:28:17 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:38.392 13:28:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.392 13:28:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.392 13:28:17 -- setup/common.sh@18 -- # local node= 00:04:38.392 13:28:17 -- setup/common.sh@19 -- # local var val 00:04:38.392 13:28:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.392 13:28:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.392 13:28:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.392 13:28:17 -- setup/common.sh@25 -- # [[ 
-n '' ]] 00:04:38.392 13:28:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.392 13:28:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5185280 kB' 'MemAvailable: 9504400 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202736 kB' 'Inactive: 3371428 kB' 'Active(anon): 138244 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147756 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'KernelStack: 4592 kB' 'PageTables: 3604 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075944 kB' 'Committed_AS: 641244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14368 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.392 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.392 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 
13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # continue 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.393 13:28:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.393 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.393 13:28:17 -- setup/common.sh@33 -- # echo 1025 00:04:38.393 13:28:17 -- setup/common.sh@33 -- # return 0 00:04:38.393 
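The lookups traced above are the get_meminfo helper from test/setup/common.sh: it dumps the meminfo file once, then walks it with IFS=': ', skipping every "key: value" field with continue until the requested key matches, and echoes the value. A minimal standalone sketch of that pattern, assuming bash 4+; this is a simplified re-implementation for illustration, not the exact SPDK helper (the sed prefix-strip stands in for the traced mem=("${mem[@]#Node +([0-9]) }") step):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup pattern seen in the trace above.
    # Prints the value of one key from /proc/meminfo, or from a per-node
    # meminfo file when a NUMA node number is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node meminfo lines carry a "Node N " prefix; strip it first.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"                        # e.g. 1025 for HugePages_Total
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1                               # key not present
    }

    get_meminfo HugePages_Total     # system-wide hugepage count
    get_meminfo HugePages_Surp 0    # surplus hugepages on node 0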
13:28:17 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:38.393 13:28:17 -- setup/hugepages.sh@112 -- # get_nodes
00:04:38.393 13:28:17 -- setup/hugepages.sh@27 -- # local node
00:04:38.393 13:28:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:38.393 13:28:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:38.393 13:28:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:38.393 13:28:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:38.393 13:28:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:38.393 13:28:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:38.393 13:28:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:38.394 13:28:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.394 13:28:17 -- setup/common.sh@18 -- # local node=0
00:04:38.394 13:28:17 -- setup/common.sh@19 -- # local var val
00:04:38.394 13:28:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:38.394 13:28:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.394 13:28:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:38.394 13:28:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:38.394 13:28:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.394 13:28:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.394 13:28:17 -- setup/common.sh@31 -- # IFS=': '
00:04:38.394 13:28:17 -- setup/common.sh@31 -- # read -r var val _
00:04:38.394 13:28:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5185004 kB' 'MemUsed: 7066088 kB' 'Active: 1202892 kB' 'Inactive: 3371428 kB' 'Active(anon): 138400 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 4444932 kB' 'Mapped: 73684 kB' 'AnonPages: 147912 kB' 'Shmem: 2616 kB' 'KernelStack: 4644 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:38.394 13:28:17 -- setup/common.sh@31 -- # IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- repeated for every node0 field from MemTotal through HugePages_Free, then:
00:04:38.394 13:28:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.394 13:28:17 -- setup/common.sh@33 -- # echo 0
00:04:38.394 13:28:17 -- setup/common.sh@33 -- # return 0
00:04:38.394 13:28:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:38.394 13:28:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:38.394 13:28:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:38.394 13:28:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:38.394 13:28:17 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:38.394 node0=1025 expecting 1025
00:04:38.394 13:28:17 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:38.394
00:04:38.394 real	0m0.893s
00:04:38.394 user	0m0.265s
00:04:38.394 sys	0m0.660s
00:04:38.394 13:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:38.394 13:28:17 -- common/autotest_common.sh@10 -- # set +x
00:04:38.394 ************************************
00:04:38.394 END TEST odd_alloc
00:04:38.394 ************************************
00:04:38.394 13:28:17 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:38.394 13:28:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:38.394 13:28:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:38.394 13:28:17 -- common/autotest_common.sh@10 -- # set +x
00:04:38.395 ************************************
00:04:38.395 START TEST custom_alloc
00:04:38.395 ************************************
00:04:38.395 13:28:17 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:38.395 13:28:17 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:38.395 13:28:17 -- setup/hugepages.sh@169 -- # local node
00:04:38.395 13:28:17 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:38.395 13:28:17 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:38.395 13:28:17 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:38.395 13:28:17 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:38.395 13:28:17 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:38.395 13:28:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:38.395 13:28:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:38.395 13:28:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:38.395 13:28:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:38.395 13:28:17 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
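custom_alloc starts by converting a size argument into a page count: the traced get_test_nr_hugepages 1048576 divides the request (apparently in kB, since 1048576 / 2048 = 512) by the default hugepage size and gets nr_hugepages=512, and the per-node pass that continues below parks all 512 pages on the single node 0. A hedged sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the snapshots above (illustrative only; the real script builds HUGENODE as an array joined on commas):

    #!/usr/bin/env bash
    # Mirrors the size -> page-count conversion traced above
    # (get_test_nr_hugepages 1048576 => nr_hugepages=512).
    default_hugepages=2048     # kB per hugepage (Hugepagesize in meminfo)
    size=1048576               # requested kB, i.e. 1 GiB
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "nr_hugepages=$nr_hugepages"       # nr_hugepages=512

    # One NUMA node, so the whole pool lands on node 0 -- which is what
    # the HUGENODE='nodes_hp[0]=512' line below hands to setup.sh:
    nodes_hp[0]=$nr_hugepages
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"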
setup/hugepages.sh@62 -- # local user_nodes 00:04:38.395 13:28:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.395 13:28:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.395 13:28:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.395 13:28:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.395 13:28:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:38.395 13:28:17 -- setup/hugepages.sh@83 -- # : 0 00:04:38.395 13:28:17 -- setup/hugepages.sh@84 -- # : 0 00:04:38.395 13:28:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:38.395 13:28:17 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.395 13:28:17 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.395 13:28:17 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:38.395 13:28:17 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:38.395 13:28:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.395 13:28:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.395 13:28:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.395 13:28:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.395 13:28:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.395 13:28:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:38.395 13:28:17 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.395 13:28:17 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.395 13:28:17 -- setup/hugepages.sh@78 -- # return 0 00:04:38.395 13:28:17 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:38.395 13:28:17 -- setup/hugepages.sh@187 -- # setup output 00:04:38.395 13:28:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.395 13:28:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:38.914 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.191 13:28:18 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:39.191 13:28:18 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:39.191 13:28:18 -- setup/hugepages.sh@89 -- # local node 00:04:39.191 13:28:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.191 13:28:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.191 13:28:18 -- setup/hugepages.sh@92 -- # local surp 00:04:39.191 13:28:18 -- setup/hugepages.sh@93 -- # local resv 00:04:39.191 13:28:18 -- setup/hugepages.sh@94 -- # local anon 00:04:39.191 13:28:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.191 13:28:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.191 13:28:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.191 13:28:18 -- setup/common.sh@18 -- # local node= 00:04:39.191 13:28:18 -- setup/common.sh@19 -- # local var val 00:04:39.191 13:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.191 
13:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.191 13:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.191 13:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.191 13:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.191 13:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6233920 kB' 'MemAvailable: 10553040 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202688 kB' 'Inactive: 3371432 kB' 'Active(anon): 138200 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064488 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147736 kB' 'Mapped: 73696 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299024 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91948 kB' 'KernelStack: 4588 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 646096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14400 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 
13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.191 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.191 13:28:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.192 13:28:18 -- setup/common.sh@33 -- # echo 0 00:04:39.192 13:28:18 -- setup/common.sh@33 -- # return 0 00:04:39.192 13:28:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:39.192 13:28:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.192 13:28:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.192 13:28:18 -- setup/common.sh@18 -- # local node= 00:04:39.192 13:28:18 -- setup/common.sh@19 -- # local var val 00:04:39.192 13:28:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.192 13:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.192 13:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.192 13:28:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.192 13:28:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.192 13:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6234140 kB' 'MemAvailable: 10553260 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202700 kB' 'Inactive: 3371432 kB' 'Active(anon): 138212 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064488 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147740 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'KernelStack: 4576 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 646096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14400 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.192 13:28:18 -- setup/common.sh@32 -- # continue 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.192 13:28:18 -- setup/common.sh@31 -- # IFS=': '
00:04:39.192 13:28:18 -- setup/common.sh@31 -- # read -r var val _
00:04:39.192 [... per-key scan of the remaining /proc/meminfo fields (Unevictable through HugePages_Rsvd), no match; repeated continue/read xtrace elided ...]
00:04:39.193 13:28:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.193 13:28:18 -- setup/common.sh@33 -- # echo 0
00:04:39.193 13:28:18 -- setup/common.sh@33 -- # return 0
00:04:39.193 13:28:18 -- setup/hugepages.sh@99 -- # surp=0
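That scan is the whole of get_meminfo: pick a meminfo source, strip any per-node prefix, then read key/value pairs until the requested field matches and echo its value. A minimal bash sketch reconstructed from this xtrace (the real helper lives in the SPDK test tree's setup/common.sh and may differ in detail):

    # Sketch of get_meminfo as implied by the trace above; not the canonical source.
    get_meminfo() {
        local get=$1 node=$2 var val
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (needs extglob).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp (host-wide) or get_meminfo HugePages_Surp 0 (node 0), it prints just the numeric field, which is where the surp=0 above comes from.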
00:04:39.193 13:28:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.193 13:28:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.193 13:28:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.193 13:28:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.193 13:28:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.193 13:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6234416 kB' 'MemAvailable: 10553536 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1202884 kB' 'Inactive: 3371432 kB' 'Active(anon): 138396 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064488 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 147604 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'KernelStack: 4544 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 646096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14400 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:39.193 [... per-key scan (MemTotal through HugePages_Free), no match; repeated continue/read xtrace elided ...]
00:04:39.195 13:28:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.195 13:28:18 -- setup/common.sh@33 -- # echo 0
00:04:39.195 13:28:18 -- setup/common.sh@33 -- # return 0
00:04:39.195 13:28:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:39.195 nr_hugepages=512
00:04:39.195 13:28:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:39.195 resv_hugepages=0
00:04:39.195 13:28:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.195 surplus_hugepages=0
00:04:39.195 13:28:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.195 anon_hugepages=0
00:04:39.195 13:28:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.195 13:28:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:39.195 13:28:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
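Those values feed the accounting check at hugepages.sh@107: the kernel's hugepage total must cover the pages the test requested plus anything reported surplus or reserved. Restated as a standalone snippet, assuming the get_meminfo sketch above (the exact variable the script compares against may differ):

    # Hypothetical restatement of the check; values mirror this run.
    nr_hugepages=512                       # requested by the test
    surp=$(get_meminfo HugePages_Surp)     # -> 0 in this log
    resv=$(get_meminfo HugePages_Rsvd)     # -> 0 in this log
    total=$(get_meminfo HugePages_Total)   # -> 512 in this log
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

With surp=0 and resv=0 the identity reduces to total == 512, which is exactly what the next query confirms.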
00:04:39.195 13:28:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.195 13:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6234676 kB' 'MemAvailable: 10553796 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1203144 kB' 'Inactive: 3371432 kB' 'Active(anon): 138656 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064488 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 148124 kB' 'Mapped: 73684 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'KernelStack: 4612 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601256 kB' 'Committed_AS: 646132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14416 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:39.195 [... per-key scan (MemTotal through CmaFree), no match; repeated continue/read xtrace elided ...]
00:04:39.196 13:28:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.196 13:28:18 -- setup/common.sh@33 -- # echo 512
00:04:39.196 13:28:18 -- setup/common.sh@33 -- # return 0
00:04:39.196 13:28:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:39.196 13:28:18 -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.196 13:28:18 -- setup/hugepages.sh@27 -- # local node
00:04:39.196 13:28:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.196 13:28:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.196 13:28:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.196 13:28:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.196 13:28:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.196 13:28:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.196 13:28:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.196 13:28:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.196 13:28:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.196 13:28:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 6234636 kB' 'MemUsed: 6016456 kB' 'Active: 1202836 kB' 'Inactive: 3371432 kB' 'Active(anon): 138348 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064488 kB' 'Inactive(file): 3369628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 4444932 kB' 'Mapped: 73684 kB' 'AnonPages: 147820 kB' 'Shmem: 2616 kB' 'KernelStack: 4640 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207076 kB' 'Slab: 299112 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.196 [... per-key scan of node0 meminfo, no match until HugePages_Surp; repeated continue/read xtrace elided ...]
00:04:39.197 13:28:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.197 13:28:18 -- setup/common.sh@33 -- # echo 0
00:04:39.197 13:28:18 -- setup/common.sh@33 -- # return 0
00:04:39.197 13:28:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.197 13:28:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.197 13:28:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.197 13:28:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.197 node0=512 expecting 512
00:04:39.197 13:28:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:39.197 13:28:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
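The per-node pass repeats the same accounting against each node's own meminfo: get_nodes enumerates /sys/devices/system/node/node*, the expectation for each node is bumped by the reserved and surplus counts, and the result is asserted ('node0=512 expecting 512'). One plausible reading of that loop as a sketch (helper names as in the sketch above; the comparison detail at hugepages.sh@130 is inferred):

    # Sketch of the per-node assertion; this run is a single-node system.
    shopt -s extglob nullglob
    declare -a nodes_test=([0]=512)        # expectation set up earlier by the test
    resv=0                                 # from the global HugePages_Rsvd query above
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        (( nodes_test[node] += resv ))     # global reserved pages, per hugepages.sh@116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        got=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$got expecting ${nodes_test[node]}"
        [[ $got == "${nodes_test[node]}" ]] || exit 1
    done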
00:04:39.197 real 0m0.678s
00:04:39.197 user 0m0.272s
00:04:39.197 sys 0m0.443s
00:04:39.197 13:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.197 13:28:18 -- common/autotest_common.sh@10 -- # set +x
00:04:39.197 ************************************
00:04:39.197 END TEST custom_alloc
00:04:39.197 ************************************
00:04:39.197 13:28:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:39.197 13:28:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:39.197 ************************************
00:04:39.197 START TEST no_shrink_alloc
00:04:39.197 ************************************
00:04:39.197 13:28:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:39.197 13:28:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.197 13:28:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.197 13:28:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.197 13:28:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.197 13:28:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:39.197 13:28:18 -- setup/hugepages.sh@73 -- # return 0
00:04:39.197 13:28:18 -- setup/hugepages.sh@198 -- # setup output
00:04:39.197 13:28:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.197 13:28:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:39.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:39.457 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
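get_test_nr_hugepages above turns a size in kB into a hugepage count and records the expectation for each listed node, which is how 2097152 kB becomes nr_hugepages=1024 (2097152 / 2048 kB per page) with node 0 expecting all of them. A sketch of that arithmetic (argument units and the default_hugepages value are inferred from this run's numbers):

    # Hypothetical sizing helper; mirrors the values traced above.
    default_hugepages=2048                  # kB; Hugepagesize from the meminfo snapshots
    get_test_nr_hugepages() {
        local size=$1                       # kB, e.g. 2097152
        shift
        local node_ids=("$@")               # e.g. (0)
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 == 1024
        local id
        for id in "${node_ids[@]}"; do
            nodes_test[id]=$nr_hugepages    # each listed node expects the full count
        done
    }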
00:04:40.030 13:28:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:40.030 13:28:19 -- setup/hugepages.sh@89 -- # local node
00:04:40.030 13:28:19 -- setup/hugepages.sh@92 -- # local surp
00:04:40.030 13:28:19 -- setup/hugepages.sh@93 -- # local resv
00:04:40.030 13:28:19 -- setup/hugepages.sh@94 -- # local anon
00:04:40.030 13:28:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.030 13:28:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.030 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.030 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5198632 kB' 'MemAvailable: 9517752 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1189776 kB' 'Inactive: 3371428 kB' 'Active(anon): 125284 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 134172 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299096 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92020 kB' 'KernelStack: 4376 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 608140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14144 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:40.030 [... per-key scan (MemTotal through HardwareCorrupted), no match; repeated continue/read xtrace elided ...]
00:04:40.031 13:28:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.031 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.031 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.031 13:28:19 -- setup/hugepages.sh@97 -- # anon=0
00:04:40.031 13:28:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.031 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5199080 kB' 'MemAvailable: 9518200 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1189588 kB' 'Inactive: 3371428 kB' 'Active(anon): 125096 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064492 kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 134688 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 299096 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92020 kB' 'KernelStack: 4352 kB' 'PageTables: 3048 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 613512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14144 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024'
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.031 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.031 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.031 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.032 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.032 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.033 13:28:19 -- setup/common.sh@33 -- # echo 0 00:04:40.033 13:28:19 -- setup/common.sh@33 -- # return 0 00:04:40.033 13:28:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:40.033 13:28:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.033 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.033 13:28:19 -- setup/common.sh@18 -- # local node= 00:04:40.033 13:28:19 -- setup/common.sh@19 -- # local var val 00:04:40.033 13:28:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.033 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.033 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
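The probes above (AnonHugePages, HugePages_Surp, and HugePages_Rsvd next) are all the same get_meminfo helper from setup/common.sh. A minimal reconstruction, inferred from this trace alone; the real helper may differ in detail, for example in how it feeds the snapshot array back into the read loop:

    shopt -s extglob
    get_meminfo() {                 # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # per-node statistics come from sysfs when a node index is given
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N" prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # kB value, or a bare count for HugePages_*
            return 0
        done
        return 1
    }

Each call scans the whole snapshot linearly, which is exactly the long run of continue lines condensed above.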
00:04:40.033 13:28:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.033 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.033 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.033 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.033 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.033 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.033 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.033 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.033 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.033 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.033 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.033 13:28:19 -- setup/common.sh@31 -- # read -r var val _
00:04:40.033 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the first one except 'MemFree: 5199332 kB' 'MemAvailable: 9518452 kB' 'Active: 1189884 kB' 'Active(anon): 125392 kB' 'AnonPages: 134972 kB' 'KernelStack: 4368 kB' 'PageTables: 3072 kB' 'Committed_AS: 613512 kB' 'VmallocUsed: 14160 kB']
[xtrace condensed: the field scan runs again, this time against HugePages_Rsvd, continuing past every non-matching key]
00:04:40.034 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.034 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.034 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.034 13:28:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:40.034 13:28:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:40.034 nr_hugepages=1024
00:04:40.034 13:28:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:40.034 resv_hugepages=0
00:04:40.035 13:28:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:40.035 surplus_hugepages=0
00:04:40.035 13:28:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:40.035 anon_hugepages=0
00:04:40.035 13:28:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.035 13:28:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
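With all four numbers in hand, hugepages.sh checks that the ledger balances before moving on to per-node accounting. The arithmetic it asserts, written out as a sketch of the body of verify_nr_hugepages (the surrounding control flow is assumed):

    nr_hugepages=1024                       # configured target
    anon=0 surp=0 resv=0                    # from the get_meminfo probes above
    total=$(get_meminfo HugePages_Total)    # 1024 on this box
    # every configured page must be a plain page: not surplus, not reserved
    (( total == nr_hugepages + surp + resv )) || return 1
    (( total == nr_hugepages )) || return 1

Here 1024 == 1024 + 0 + 0, so both guards pass and the system-wide view is consistent.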
00:04:40.035 13:28:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.035 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.035 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.035 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.035 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.035 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.035 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.035 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.035 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.035 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.035 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.035 13:28:19 -- setup/common.sh@31 -- # read -r var val _
00:04:40.035 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the first one except 'MemFree: 5199332 kB' 'MemAvailable: 9518452 kB' 'Active: 1189964 kB' 'Active(anon): 125472 kB' 'AnonPages: 134664 kB' 'KernelStack: 4336 kB' 'PageTables: 3024 kB' 'Committed_AS: 611692 kB' 'VmallocUsed: 14160 kB']
[xtrace condensed: the field scan runs once more, against HugePages_Total]
00:04:40.036 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.036 13:28:19 -- setup/common.sh@33 -- # echo 1024
00:04:40.036 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.036 13:28:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.036 13:28:19 -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.036 13:28:19 -- setup/hugepages.sh@27 -- # local node
00:04:40.036 13:28:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.036 13:28:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:40.036 13:28:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:40.036 13:28:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:40.036 13:28:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.036 13:28:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
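get_nodes found a single NUMA node, so the per-node pass below re-runs the surplus probe against node0's own meminfo. Roughly, and under the assumption that nodes_test was seeded with the expected per-node page count earlier in hugepages.sh:

    shopt -s extglob
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$nr_hugepages    # what the kernel should show per node
    done
    no_nodes=${#nodes_sys[@]}                      # 1 on this VM
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                       # reserved pages count toward the node
        surp=$(get_meminfo HugePages_Surp "$node")           # read from the node's sysfs meminfo
        (( nodes_test[node] += surp ))
    done

The node argument is what flips get_meminfo from /proc/meminfo to /sys/devices/system/node/node0/meminfo, as the next probe shows.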
kB' 'Inactive(file): 3369624 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 4444932 kB' 'Mapped: 72784 kB' 'AnonPages: 135100 kB' 'Shmem: 2616 kB' 'KernelStack: 4384 kB' 'PageTables: 3092 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207076 kB' 'Slab: 299096 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 92020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:40.037 13:28:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.037 13:28:19 -- setup/common.sh@32 -- # continue
[... xtrace elided: the setup/common.sh@31-32 read/compare/continue step repeats for every remaining node0 meminfo field until the requested one matches ...]
00:04:40.038 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.038 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.038 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.038 13:28:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.038 13:28:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.038 13:28:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.038 13:28:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:04:40.038 13:28:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:40.038 13:28:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
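The trace above is SPDK's get_meminfo helper: it snapshots a meminfo file into an array, then scans it field by field until the requested key matches, which is why the log repeats one read/compare/continue step per field. A minimal sketch of that helper, reconstructed from the setup/common.sh@17-33 trace lines (the exact SPDK source and its error handling may differ):

#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the setup/common.sh trace.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
  local get=$1 node=${2:-}   # field name, optional NUMA node
  local var val
  local mem_f=/proc/meminfo mem
  # Per-node counters live in sysfs; with no node argument the trace probes
  # /sys/devices/system/node/node/meminfo, which fails, so the global
  # /proc/meminfo is used instead.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on sysfs lines
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

Called as get_meminfo HugePages_Surp 0, this walks node0's meminfo and prints 0, matching the echo 0 / return 0 pair traced above.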
00:04:40.038 13:28:19 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:40.038 13:28:19 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:40.038 13:28:19 -- setup/hugepages.sh@202 -- # setup output
00:04:40.038 13:28:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.038 13:28:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:40.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:40.610 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:40.610 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:40.610 13:28:19 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:40.610 13:28:19 -- setup/hugepages.sh@89 -- # local node
00:04:40.610 13:28:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:40.610 13:28:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:40.610 13:28:19 -- setup/hugepages.sh@92 -- # local surp
00:04:40.610 13:28:19 -- setup/hugepages.sh@93 -- # local resv
00:04:40.610 13:28:19 -- setup/hugepages.sh@94 -- # local anon
00:04:40.610 13:28:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.610 13:28:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.610 13:28:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.610 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.610 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.610 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.610 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.610 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.610 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.610 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.610 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.610 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.610 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5199856 kB' 'MemAvailable: 9518976 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1189904 kB' 'Inactive: 3371424 kB' 'Active(anon): 125408 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3369620 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 135588 kB' 'Mapped: 72988 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 298972 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91896 kB' 'KernelStack: 4448 kB' 'PageTables: 3224 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 607628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14144 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:04:40.610 13:28:19 -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the setup/common.sh@31-32 read/compare/continue step repeats for each meminfo field until AnonHugePages is reached ...]
00:04:40.611 13:28:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.611 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.611 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.611 13:28:19 -- setup/hugepages.sh@97 -- # anon=0
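Before counting hugepages, verify_nr_hugepages rules out transparent hugepages interfering: the @96 test above matches the content of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this box) against *[never]*, and since THP is not disabled it also samples AnonHugePages. The same logic as a standalone sketch (variable names mirror the trace; the surrounding wiring is an assumption):

# THP/anon accounting as traced at setup/hugepages.sh@96-97 (sketch).
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
  # THP may be active, so record THP-backed anonymous memory (kB); 0 in this run.
  anon=$(get_meminfo AnonHugePages)
fi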
00:04:40.611 13:28:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.611 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.611 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.611 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.611 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.611 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.611 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.611 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.611 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.611 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.611 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.611 13:28:19 -- setup/common.sh@31 -- # read -r var val _
00:04:40.612 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5200156 kB' 'MemAvailable: 9519276 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1190140 kB' 'Inactive: 3371424 kB' 'Active(anon): 125644 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3369620 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 135652 kB' 'Mapped: 72996 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 298972 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91896 kB' 'KernelStack: 4472 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 607628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14144 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[... xtrace elided: the setup/common.sh@31-32 read/compare/continue step repeats for each meminfo field until HugePages_Surp is reached ...]
00:04:40.613 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.613 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.613 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.613 13:28:19 -- setup/hugepages.sh@99 -- # surp=0
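For reference, the field-by-field scan traced above is functionally a single-key lookup; outside the test framework the same value could be read in one line (a generic equivalent, not SPDK code):

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this host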
00:04:40.613 13:28:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.613 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.613 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.613 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.613 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.613 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.613 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.613 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.613 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.613 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.613 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.613 13:28:19 -- setup/common.sh@31 -- # read -r var val _
00:04:40.613 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5200424 kB' 'MemAvailable: 9519544 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1190048 kB' 'Inactive: 3371424 kB' 'Active(anon): 125552 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3369620 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 135224 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 298972 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91896 kB' 'KernelStack: 4408 kB' 'PageTables: 3340 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 612456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14160 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[... xtrace elided: the setup/common.sh@31-32 read/compare/continue step repeats for each meminfo field until HugePages_Rsvd is reached ...]
00:04:40.615 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.615 13:28:19 -- setup/common.sh@33 -- # echo 0
00:04:40.615 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.615 13:28:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:40.615 13:28:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:40.615 13:28:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:40.615 13:28:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:40.615 13:28:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:40.615 13:28:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.615 13:28:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
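The two arithmetic tests above are the core of the verification: the page count the test expects must equal what the kernel reports once surplus and reserved pages are folded in. As a standalone sketch (names mirror the trace; the surrounding wiring is an assumption):

# Invariant behind setup/hugepages.sh@107-110 (sketch, using get_meminfo above).
nr_hugepages=1024                       # what this run expects to find
surp=$(get_meminfo HugePages_Surp)      # 0 in the trace
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the trace
total=$(get_meminfo HugePages_Total)    # 1024 in the trace
# Surplus and reserved pages are counted inside HugePages_Total, so the
# expected count only matches after adding them back (both are 0 here).
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2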
00:04:40.615 13:28:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.615 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.615 13:28:19 -- setup/common.sh@18 -- # local node=
00:04:40.615 13:28:19 -- setup/common.sh@19 -- # local var val
00:04:40.615 13:28:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.615 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.615 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.615 13:28:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.615 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.615 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.615 13:28:19 -- setup/common.sh@31 -- # IFS=': '
00:04:40.615 13:28:19 -- setup/common.sh@31 -- # read -r var val _
00:04:40.615 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5200164 kB' 'MemAvailable: 9519284 kB' 'Buffers: 37600 kB' 'Cached: 4407332 kB' 'SwapCached: 0 kB' 'Active: 1189936 kB' 'Inactive: 3371424 kB' 'Active(anon): 125440 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3369620 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 134932 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 207076 kB' 'Slab: 298972 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91896 kB' 'KernelStack: 4412 kB' 'PageTables: 3228 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076968 kB' 'Committed_AS: 617828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14176 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[... xtrace elided: the setup/common.sh@31-32 read/compare/continue step repeats for each meminfo field until HugePages_Total is reached ...]
00:04:40.616 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.616 13:28:19 -- setup/common.sh@33 -- # echo 1024
00:04:40.616 13:28:19 -- setup/common.sh@33 -- # return 0
00:04:40.616 13:28:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.616 13:28:19 -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.616 13:28:19 -- setup/hugepages.sh@27 -- # local node
00:04:40.616 13:28:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.616 13:28:19 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.616 13:28:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.616 13:28:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.617 13:28:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.617 13:28:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.617 13:28:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.617 13:28:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.617 13:28:19 -- setup/common.sh@18 -- # local node=0 00:04:40.617 13:28:19 -- setup/common.sh@19 -- # local var val 00:04:40.617 13:28:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.617 13:28:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.617 13:28:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.617 13:28:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.617 13:28:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.617 13:28:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251092 kB' 'MemFree: 5199928 kB' 'MemUsed: 7051164 kB' 'Active: 1189752 kB' 'Inactive: 3371424 kB' 'Active(anon): 125256 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1064496 kB' 'Inactive(file): 3369620 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 4444932 kB' 'Mapped: 72784 kB' 'AnonPages: 134896 kB' 'Shmem: 2616 kB' 'KernelStack: 4364 kB' 'PageTables: 3172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 207076 kB' 'Slab: 299004 kB' 'SReclaimable: 207076 kB' 'SUnreclaim: 91928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.617 13:28:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:40.617 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.617 13:28:19 -- setup/common.sh@31 -- # read -r var val _
[... the scan now walks node0's meminfo the same way, one ([[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / read) triplet per field, Inactive(anon) through FileHugePages, with no match ...]
00:04:40.618 13:28:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # continue
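Each triplet in the scan above is one iteration of get_meminfo from setup/common.sh: split a 'Field: value' meminfo line on ':' and whitespace, skip until the requested field matches, then echo its value. A minimal sketch of that pattern, assuming simplified error handling and a sed-based strip of the 'Node <n> ' prefix in place of the script's mapfile-based one:

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo

    # A node argument switches to that node's own meminfo file when present
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # not the requested field, keep scanning
        echo "$val"                       # e.g. 1024 for HugePages_Total
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}

Here the per-node pass is about to report HugePages_Surp as 0, so node0 keeps its full 1024 pre-allocated hugepages, which the 'node0=1024 expecting 1024' check below confirms.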
00:04:40.618 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.618 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.618 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.618 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # continue 00:04:40.618 13:28:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.618 13:28:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.618 13:28:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.618 13:28:19 -- setup/common.sh@33 -- # echo 0 00:04:40.618 13:28:19 -- setup/common.sh@33 -- # return 0 00:04:40.618 13:28:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.618 13:28:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.618 13:28:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.618 13:28:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.618 13:28:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.618 node0=1024 expecting 1024 00:04:40.618 13:28:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.618 00:04:40.618 real 0m1.447s 00:04:40.618 user 0m0.542s 00:04:40.618 sys 0m0.972s 00:04:40.618 13:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.618 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:40.618 ************************************ 00:04:40.618 END TEST no_shrink_alloc 00:04:40.618 ************************************ 00:04:40.618 13:28:19 -- setup/hugepages.sh@217 -- # clear_hp 00:04:40.618 13:28:19 -- setup/hugepages.sh@37 -- # local node hp 00:04:40.618 13:28:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.618 13:28:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.618 13:28:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:40.618 13:28:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.618 13:28:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:40.618 13:28:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.618 13:28:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.618 00:04:40.618 real 0m6.345s 00:04:40.618 user 0m2.141s 00:04:40.618 sys 0m4.408s 00:04:40.618 13:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.618 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:40.618 ************************************ 00:04:40.618 END TEST hugepages 00:04:40.618 ************************************ 00:04:40.618 13:28:19 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:40.618 13:28:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.618 13:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.618 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:04:40.618 ************************************ 00:04:40.618 START TEST driver 00:04:40.618 ************************************ 00:04:40.618 13:28:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:40.877 * Looking for test storage... 
00:04:40.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:40.877 13:28:20 -- setup/driver.sh@68 -- # setup reset 00:04:40.877 13:28:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.877 13:28:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.445 13:28:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:41.445 13:28:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.445 13:28:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.445 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.445 ************************************ 00:04:41.445 START TEST guess_driver 00:04:41.445 ************************************ 00:04:41.445 13:28:20 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:41.445 13:28:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:41.445 13:28:20 -- setup/driver.sh@47 -- # local fail=0 00:04:41.445 13:28:20 -- setup/driver.sh@49 -- # pick_driver 00:04:41.445 13:28:20 -- setup/driver.sh@36 -- # vfio 00:04:41.445 13:28:20 -- setup/driver.sh@21 -- # local iommu_grups 00:04:41.445 13:28:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:41.445 13:28:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:41.445 13:28:20 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:41.445 13:28:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:41.445 13:28:20 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:41.445 13:28:20 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:41.445 13:28:20 -- setup/driver.sh@32 -- # return 1 00:04:41.445 13:28:20 -- setup/driver.sh@38 -- # uio 00:04:41.445 13:28:20 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:04:41.445 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:41.445 13:28:20 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:41.445 Looking for driver=uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:41.445 13:28:20 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:41.445 13:28:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:41.445 13:28:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.445 13:28:20 -- setup/driver.sh@45 -- # setup output config 00:04:41.445 13:28:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.445 13:28:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.704 13:28:20 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:41.704 13:28:20 -- setup/driver.sh@58 -- # continue 00:04:41.704 13:28:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.964 13:28:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.964 13:28:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.964 13:28:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.900 13:28:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:42.900 13:28:22 -- setup/driver.sh@65 -- # setup reset 
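The driver pick that just completed boils down to: prefer vfio-pci when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic, accepted only if modprobe resolves the module to a real .ko. A rough sketch of that decision, reconstructed from the xtrace rather than copied from driver.sh (the grep stands in for the script's glob match on the modprobe output):

pick_driver() {
    local unsafe_vfio=N iommu_groups

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # With nullglob set (autotest_common.sh enables it), an empty
    # /sys/kernel/iommu_groups leaves this array empty
    iommu_groups=(/sys/kernel/iommu_groups/*)

    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        echo uio_pci_generic   # the branch this VM takes
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

On this VM the IOMMU group list is empty and unsafe_vfio is N, which is why the trace returns 1 from the vfio branch and settles on uio_pci_generic.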
00:04:42.900 13:28:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.900 13:28:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.468 ************************************ 00:04:43.468 END TEST guess_driver 00:04:43.468 ************************************ 00:04:43.468 00:04:43.468 real 0m1.987s 00:04:43.468 user 0m0.482s 00:04:43.468 sys 0m1.514s 00:04:43.468 13:28:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.468 13:28:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.468 ************************************ 00:04:43.468 END TEST driver 00:04:43.468 ************************************ 00:04:43.468 00:04:43.468 real 0m2.631s 00:04:43.468 user 0m0.816s 00:04:43.468 sys 0m1.849s 00:04:43.469 13:28:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.469 13:28:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.469 13:28:22 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.469 13:28:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.469 13:28:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.469 13:28:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.469 ************************************ 00:04:43.469 START TEST devices 00:04:43.469 ************************************ 00:04:43.469 13:28:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.469 * Looking for test storage... 00:04:43.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.469 13:28:22 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:43.469 13:28:22 -- setup/devices.sh@192 -- # setup reset 00:04:43.469 13:28:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.469 13:28:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.038 13:28:23 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:44.038 13:28:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:44.038 13:28:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:44.038 13:28:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:44.038 13:28:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:44.038 13:28:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:44.038 13:28:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:44.038 13:28:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.038 13:28:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:44.038 13:28:23 -- setup/devices.sh@196 -- # blocks=() 00:04:44.038 13:28:23 -- setup/devices.sh@196 -- # declare -a blocks 00:04:44.038 13:28:23 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:44.038 13:28:23 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:44.038 13:28:23 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:44.038 13:28:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.038 13:28:23 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:44.038 13:28:23 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:44.038 13:28:23 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:44.038 13:28:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:44.038 13:28:23 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:44.038 13:28:23 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:44.038 13:28:23 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:44.038 No valid GPT data, bailing 00:04:44.038 13:28:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.038 13:28:23 -- scripts/common.sh@393 -- # pt= 00:04:44.038 13:28:23 -- scripts/common.sh@394 -- # return 1 00:04:44.038 13:28:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:44.038 13:28:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:44.038 13:28:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:44.038 13:28:23 -- setup/common.sh@80 -- # echo 5368709120 00:04:44.038 13:28:23 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:44.038 13:28:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.038 13:28:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:44.038 13:28:23 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:44.038 13:28:23 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:44.038 13:28:23 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:44.038 13:28:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.038 13:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.038 13:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:44.038 ************************************ 00:04:44.038 START TEST nvme_mount 00:04:44.038 ************************************ 00:04:44.038 13:28:23 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:44.038 13:28:23 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:44.038 13:28:23 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:44.038 13:28:23 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.038 13:28:23 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.038 13:28:23 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:44.038 13:28:23 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.038 13:28:23 -- setup/common.sh@40 -- # local part_no=1 00:04:44.038 13:28:23 -- setup/common.sh@41 -- # local size=1073741824 00:04:44.038 13:28:23 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.038 13:28:23 -- setup/common.sh@44 -- # parts=() 00:04:44.038 13:28:23 -- setup/common.sh@44 -- # local parts 00:04:44.038 13:28:23 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.038 13:28:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.038 13:28:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.038 13:28:23 -- setup/common.sh@46 -- # (( part++ )) 00:04:44.038 13:28:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.038 13:28:23 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:44.038 13:28:23 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.038 13:28:23 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:45.414 Creating new GPT entries in memory. 00:04:45.414 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.414 other utilities. 00:04:45.414 13:28:24 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.414 13:28:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.414 13:28:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:45.414 13:28:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.414 13:28:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:46.352 Creating new GPT entries in memory. 00:04:46.352 The operation has completed successfully. 00:04:46.352 13:28:25 -- setup/common.sh@57 -- # (( part++ )) 00:04:46.352 13:28:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.352 13:28:25 -- setup/common.sh@62 -- # wait 98367 00:04:46.352 13:28:25 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.352 13:28:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:46.352 13:28:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.352 13:28:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:46.352 13:28:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:46.352 13:28:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.352 13:28:25 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.352 13:28:25 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:46.352 13:28:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:46.352 13:28:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.352 13:28:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.352 13:28:25 -- setup/devices.sh@53 -- # local found=0 00:04:46.352 13:28:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.352 13:28:25 -- setup/devices.sh@56 -- # : 00:04:46.352 13:28:25 -- setup/devices.sh@59 -- # local pci status 00:04:46.352 13:28:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.352 13:28:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:46.352 13:28:25 -- setup/devices.sh@47 -- # setup output config 00:04:46.352 13:28:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.352 13:28:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.611 13:28:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:46.611 13:28:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:46.611 13:28:25 -- setup/devices.sh@63 -- # found=1 00:04:46.611 13:28:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.611 13:28:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:46.611 13:28:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.611 13:28:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:46.611 13:28:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.549 13:28:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.549 13:28:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:47.549 13:28:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.549 13:28:26 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.549 13:28:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:47.549 13:28:26 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:47.549 13:28:26 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.549 13:28:26 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.549 13:28:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.549 13:28:26 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.549 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.549 13:28:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.549 13:28:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.808 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.808 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.808 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.808 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.808 13:28:26 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:47.808 13:28:26 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:47.808 13:28:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.808 13:28:26 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:47.808 13:28:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:47.808 13:28:26 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.808 13:28:26 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:47.808 13:28:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:47.808 13:28:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:47.808 13:28:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.808 13:28:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:47.808 13:28:26 -- setup/devices.sh@53 -- # local found=0 00:04:47.808 13:28:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.808 13:28:26 -- setup/devices.sh@56 -- # : 00:04:47.808 13:28:26 -- setup/devices.sh@59 -- # local pci status 00:04:47.808 13:28:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.808 13:28:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:47.808 13:28:26 -- setup/devices.sh@47 -- # setup output config 00:04:47.808 13:28:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.808 13:28:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.072 13:28:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.072 13:28:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:48.072 13:28:27 -- setup/devices.sh@63 -- # found=1 00:04:48.072 13:28:27 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:48.072 13:28:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.072 13:28:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.072 13:28:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:48.072 13:28:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.007 13:28:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.007 13:28:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:49.007 13:28:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.007 13:28:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.007 13:28:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.007 13:28:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.007 13:28:28 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:49.007 13:28:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:49.007 13:28:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.007 13:28:28 -- setup/devices.sh@50 -- # local mount_point= 00:04:49.007 13:28:28 -- setup/devices.sh@51 -- # local test_file= 00:04:49.007 13:28:28 -- setup/devices.sh@53 -- # local found=0 00:04:49.007 13:28:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.007 13:28:28 -- setup/devices.sh@59 -- # local pci status 00:04:49.007 13:28:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.007 13:28:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.007 13:28:28 -- setup/devices.sh@47 -- # setup output config 00:04:49.007 13:28:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.007 13:28:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.575 13:28:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.575 13:28:28 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:49.575 13:28:28 -- setup/devices.sh@63 -- # found=1 00:04:49.575 13:28:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.575 13:28:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.575 13:28:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.575 13:28:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.575 13:28:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.514 13:28:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.514 13:28:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.514 13:28:29 -- setup/devices.sh@68 -- # return 0 00:04:50.514 13:28:29 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.514 13:28:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.514 13:28:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.514 13:28:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.514 13:28:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.514 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.514 00:04:50.514 real 0m6.392s 00:04:50.514 user 0m0.786s 00:04:50.514 sys 0m3.515s 00:04:50.514 13:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.514 13:28:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.514 ************************************ 00:04:50.514 END TEST nvme_mount 00:04:50.514 ************************************ 00:04:50.514 13:28:29 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.514 13:28:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.514 13:28:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.514 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.514 ************************************ 00:04:50.514 START TEST dm_mount 00:04:50.514 ************************************ 00:04:50.514 13:28:29 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:50.514 13:28:29 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.514 13:28:29 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.514 13:28:29 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.514 13:28:29 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.514 13:28:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.514 13:28:29 -- setup/common.sh@40 -- # local part_no=2 00:04:50.514 13:28:29 -- setup/common.sh@41 -- # local size=1073741824 00:04:50.514 13:28:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.514 13:28:29 -- setup/common.sh@44 -- # parts=() 00:04:50.514 13:28:29 -- setup/common.sh@44 -- # local parts 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.514 13:28:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part++ )) 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.514 13:28:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part++ )) 00:04:50.514 13:28:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.514 13:28:29 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:50.514 13:28:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.514 13:28:29 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:51.891 Creating new GPT entries in memory. 00:04:51.891 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.891 other utilities. 00:04:51.891 13:28:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.891 13:28:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.891 13:28:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.891 13:28:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.891 13:28:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.828 Creating new GPT entries in memory. 00:04:52.828 The operation has completed successfully. 00:04:52.828 13:28:31 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.828 13:28:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.828 13:28:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.828 13:28:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.828 13:28:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:53.771 The operation has completed successfully. 
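The sgdisk calls above are partition_drive at work: wipe the GPT, then lay out consecutive equal-sized partitions starting at sector 2048, each write serialized with flock so nothing probes the disk mid-rewrite. A condensed sketch of the flow, reconstructed from the xtrace (it drops the parts array and the sync_dev_uevents.sh wrapper that makes the test wait for udev):

partition_drive() {
    local disk=$1 part_no=${2:-2} size=1073741824
    local part part_start=0 part_end=0

    (( size /= 4096 ))             # 1073741824 -> 262144, the per-partition sector count
    sgdisk "/dev/$disk" --zap-all  # destroy any existing GPT/MBR data

    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

That arithmetic reproduces the 1:2048:264191 and 2:264192:526335 ranges in the log; the wait on the helper's PID that follows is sync_dev_uevents.sh confirming udev created /dev/nvme0n1p1 and p2.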
00:04:53.771 13:28:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.771 13:28:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.771 13:28:33 -- setup/common.sh@62 -- # wait 98881 00:04:53.771 13:28:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.771 13:28:33 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.771 13:28:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.771 13:28:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.771 13:28:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.771 13:28:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.771 13:28:33 -- setup/devices.sh@161 -- # break 00:04:53.771 13:28:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.771 13:28:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.771 13:28:33 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:53.771 13:28:33 -- setup/devices.sh@166 -- # dm=dm-0 00:04:53.771 13:28:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:53.771 13:28:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:53.771 13:28:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.771 13:28:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:53.771 13:28:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.771 13:28:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.771 13:28:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:54.033 13:28:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.033 13:28:33 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.033 13:28:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.033 13:28:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:54.033 13:28:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.033 13:28:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.033 13:28:33 -- setup/devices.sh@53 -- # local found=0 00:04:54.033 13:28:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.033 13:28:33 -- setup/devices.sh@56 -- # : 00:04:54.033 13:28:33 -- setup/devices.sh@59 -- # local pci status 00:04:54.033 13:28:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.033 13:28:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.033 13:28:33 -- setup/devices.sh@47 -- # setup output config 00:04:54.033 13:28:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.033 13:28:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.033 13:28:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.033 13:28:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:54.033 13:28:33 -- setup/devices.sh@63 -- # found=1 00:04:54.033 13:28:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.291 13:28:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.291 13:28:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.291 13:28:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.291 13:28:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.225 13:28:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.225 13:28:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:55.225 13:28:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:55.225 13:28:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:55.225 13:28:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:55.225 13:28:34 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:55.225 13:28:34 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:55.225 13:28:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:55.225 13:28:34 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:55.225 13:28:34 -- setup/devices.sh@50 -- # local mount_point= 00:04:55.225 13:28:34 -- setup/devices.sh@51 -- # local test_file= 00:04:55.225 13:28:34 -- setup/devices.sh@53 -- # local found=0 00:04:55.225 13:28:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.225 13:28:34 -- setup/devices.sh@59 -- # local pci status 00:04:55.225 13:28:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.225 13:28:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.225 13:28:34 -- setup/devices.sh@47 -- # setup output config 00:04:55.225 13:28:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.226 13:28:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.484 13:28:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.484 13:28:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:55.484 13:28:34 -- setup/devices.sh@63 -- # found=1 00:04:55.484 13:28:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.484 13:28:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.484 13:28:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.484 13:28:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.484 13:28:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.420 13:28:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.420 13:28:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.420 13:28:35 -- setup/devices.sh@68 -- # return 0 00:04:56.420 13:28:35 -- setup/devices.sh@187 -- # cleanup_dm 00:04:56.420 13:28:35 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:56.679 13:28:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.679 13:28:35 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:56.679 13:28:35 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.679 13:28:35 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:56.679 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.679 13:28:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.679 13:28:35 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:56.679 00:04:56.679 real 0m6.074s 00:04:56.679 user 0m0.496s 00:04:56.679 sys 0m2.362s 00:04:56.679 13:28:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.679 13:28:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.679 ************************************ 00:04:56.679 END TEST dm_mount 00:04:56.679 ************************************ 00:04:56.679 13:28:35 -- setup/devices.sh@1 -- # cleanup 00:04:56.679 13:28:35 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:56.679 13:28:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.679 13:28:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.679 13:28:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.679 13:28:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.679 13:28:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.679 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.679 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.679 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.679 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.679 13:28:35 -- setup/devices.sh@12 -- # cleanup_dm 00:04:56.679 13:28:35 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:56.679 13:28:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:56.679 13:28:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.679 13:28:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:56.679 13:28:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.679 13:28:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:56.679 ************************************ 00:04:56.679 END TEST devices 00:04:56.679 ************************************ 00:04:56.679 00:04:56.679 real 0m13.380s 00:04:56.679 user 0m1.730s 00:04:56.679 sys 0m6.305s 00:04:56.679 13:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.679 13:28:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.937 00:04:56.937 real 0m28.053s 00:04:56.937 user 0m6.518s 00:04:56.937 sys 0m16.609s 00:04:56.937 13:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.937 13:28:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.937 ************************************ 00:04:56.937 END TEST setup.sh 00:04:56.937 ************************************ 00:04:56.937 13:28:36 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:56.937 Hugepages 00:04:56.937 node hugesize free / total 00:04:56.937 node0 1048576kB 0 / 0 00:04:56.937 node0 2048kB 2048 / 2048 00:04:56.937 00:04:56.937 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.195 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:57.195 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:57.195 13:28:36 -- spdk/autotest.sh@141 -- # uname -s 00:04:57.195 13:28:36 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:57.195 13:28:36 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:04:57.195 13:28:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:57.761 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.697 13:28:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:59.632 13:28:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:59.632 13:28:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:59.632 13:28:38 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:59.632 13:28:38 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:59.632 13:28:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.632 13:28:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.632 13:28:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.632 13:28:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:59.632 13:28:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.889 13:28:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:59.889 13:28:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:59.889 13:28:39 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.148 Waiting for block devices as requested 00:05:00.148 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:00.511 13:28:39 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:00.511 13:28:39 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:00.511 13:28:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:00.511 13:28:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:00.511 13:28:39 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:00.511 13:28:39 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:00.511 13:28:39 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:00.511 13:28:39 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:00.511 13:28:39 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:00.511 13:28:39 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:00.511 13:28:39 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:00.511 13:28:39 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:00.511 13:28:39 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:00.511 13:28:39 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:00.511 13:28:39 -- common/autotest_common.sh@1542 -- # continue 00:05:00.511 13:28:39 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:00.511 13:28:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:00.511 13:28:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 13:28:39 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:00.511 13:28:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:00.511 13:28:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 13:28:39 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.030 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.968 13:28:41 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:01.968 13:28:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:01.968 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.968 13:28:41 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:01.968 13:28:41 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:01.968 13:28:41 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:01.968 13:28:41 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:01.968 13:28:41 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:01.968 13:28:41 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:01.968 13:28:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.968 13:28:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.968 13:28:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.968 13:28:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.968 13:28:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.968 13:28:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:01.968 13:28:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:01.968 13:28:41 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:01.968 13:28:41 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:01.968 13:28:41 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:01.968 13:28:41 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:01.968 13:28:41 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:01.968 13:28:41 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:01.968 13:28:41 -- common/autotest_common.sh@1578 -- # return 0 00:05:01.968 13:28:41 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:01.968 13:28:41 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:01.968 13:28:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.968 13:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.968 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.968 ************************************ 00:05:01.968 START TEST unittest 00:05:01.968 ************************************ 00:05:01.968 13:28:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:01.968 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:01.968 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:01.968 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:01.968 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:01.968 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:01.968 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:01.968 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:01.968 ++ rpc_py=rpc_cmd 00:05:01.968 ++ set -e 00:05:01.968 ++ shopt -s nullglob 00:05:01.968 ++ shopt -s extglob 00:05:01.968 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:01.968 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:01.968 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:01.968 +++ CONFIG_FIO_PLUGIN=y 00:05:01.968 +++ CONFIG_NVME_CUSE=y 00:05:01.968 +++ CONFIG_RAID5F=y 00:05:01.968 +++ CONFIG_LTO=n 00:05:01.968 +++ CONFIG_SMA=n 00:05:01.968 +++ CONFIG_ISAL=y 00:05:01.968 +++ CONFIG_OPENSSL_PATH= 00:05:01.968 +++ CONFIG_IDXD_KERNEL=n 00:05:01.968 +++ CONFIG_URING_PATH= 00:05:01.968 +++ CONFIG_DAOS=n 00:05:01.968 +++ CONFIG_DPDK_LIB_DIR= 00:05:01.968 +++ CONFIG_OCF=n 00:05:01.968 +++ CONFIG_EXAMPLES=y 00:05:01.968 +++ CONFIG_RDMA_PROV=verbs 00:05:01.968 +++ CONFIG_ISCSI_INITIATOR=y 00:05:01.968 +++ CONFIG_VTUNE=n 00:05:01.968 +++ CONFIG_DPDK_INC_DIR= 00:05:01.968 +++ CONFIG_CET=n 00:05:01.968 +++ CONFIG_TESTS=y 00:05:01.968 +++ CONFIG_APPS=y 00:05:01.968 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:01.968 +++ CONFIG_DAOS_DIR= 00:05:01.968 +++ CONFIG_CRYPTO_MLX5=n 00:05:01.968 +++ CONFIG_XNVME=n 00:05:01.968 +++ CONFIG_UNIT_TESTS=y 00:05:01.968 +++ CONFIG_FUSE=n 00:05:01.968 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:01.968 +++ CONFIG_OCF_PATH= 00:05:01.968 +++ CONFIG_WPDK_DIR= 00:05:01.968 +++ CONFIG_VFIO_USER=n 00:05:01.968 +++ CONFIG_MAX_LCORES= 00:05:01.968 +++ CONFIG_ARCH=native 00:05:01.968 +++ CONFIG_TSAN=n 00:05:01.968 +++ CONFIG_VIRTIO=y 00:05:01.968 +++ CONFIG_IPSEC_MB=n 00:05:01.968 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:01.968 +++ CONFIG_ASAN=y 00:05:01.968 +++ CONFIG_SHARED=n 00:05:01.968 +++ CONFIG_VTUNE_DIR= 00:05:01.968 +++ CONFIG_RDMA_SET_TOS=y 00:05:01.968 +++ CONFIG_VBDEV_COMPRESS=n 00:05:01.968 +++ CONFIG_VFIO_USER_DIR= 00:05:01.968 +++ CONFIG_FUZZER_LIB= 00:05:01.968 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:01.968 +++ CONFIG_USDT=n 00:05:01.968 +++ CONFIG_URING_ZNS=n 00:05:01.968 +++ CONFIG_FC_PATH= 00:05:01.968 +++ CONFIG_COVERAGE=y 00:05:01.968 +++ CONFIG_CUSTOMOCF=n 00:05:01.968 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:01.968 +++ CONFIG_WERROR=y 00:05:01.968 +++ CONFIG_DEBUG=y 00:05:01.968 +++ CONFIG_RDMA=y 00:05:01.968 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:01.968 +++ CONFIG_FUZZER=n 00:05:01.968 +++ CONFIG_FC=n 00:05:01.968 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:01.968 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:01.968 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:01.968 +++ CONFIG_CROSS_PREFIX= 00:05:01.968 +++ CONFIG_PREFIX=/usr/local 00:05:01.968 +++ CONFIG_HAVE_LIBBSD=n 00:05:01.968 +++ CONFIG_UBSAN=y 00:05:01.968 +++ CONFIG_PGO_CAPTURE=n 00:05:01.968 +++ CONFIG_UBLK=n 00:05:01.968 +++ CONFIG_ISAL_CRYPTO=y 00:05:01.968 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:01.968 +++ CONFIG_CRYPTO=n 00:05:01.968 +++ CONFIG_RBD=n 00:05:01.968 +++ CONFIG_LIBDIR= 00:05:01.968 +++ CONFIG_IPSEC_MB_DIR= 00:05:01.968 +++ CONFIG_PGO_USE=n 00:05:01.968 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:01.968 +++ CONFIG_GOLANG=n 00:05:01.968 +++ CONFIG_VHOST=y 00:05:01.968 +++ CONFIG_IDXD=y 00:05:01.968 +++ CONFIG_AVAHI=n 00:05:01.968 +++ CONFIG_URING=n 00:05:01.968 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:01.968 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:01.968 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:05:01.968 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:01.968 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:01.968 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:01.968 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:01.968 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:01.968 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:01.968 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:01.968 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:01.968 +++ VHOST_APP=("$_app_dir/vhost") 00:05:01.968 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:01.968 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:01.968 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:01.968 +++ [[ #ifndef SPDK_CONFIG_H 00:05:01.968 #define SPDK_CONFIG_H 00:05:01.968 #define SPDK_CONFIG_APPS 1 00:05:01.968 #define SPDK_CONFIG_ARCH native 00:05:01.968 #define SPDK_CONFIG_ASAN 1 00:05:01.968 #undef SPDK_CONFIG_AVAHI 00:05:01.968 #undef SPDK_CONFIG_CET 00:05:01.968 #define SPDK_CONFIG_COVERAGE 1 00:05:01.968 #define SPDK_CONFIG_CROSS_PREFIX 00:05:01.968 #undef SPDK_CONFIG_CRYPTO 00:05:01.968 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:01.968 #undef SPDK_CONFIG_CUSTOMOCF 00:05:01.968 #undef SPDK_CONFIG_DAOS 00:05:01.968 #define SPDK_CONFIG_DAOS_DIR 00:05:01.968 #define SPDK_CONFIG_DEBUG 1 00:05:01.968 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:01.968 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:01.968 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:01.968 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:01.968 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:01.968 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:01.968 #define SPDK_CONFIG_EXAMPLES 1 00:05:01.968 #undef SPDK_CONFIG_FC 00:05:01.968 #define SPDK_CONFIG_FC_PATH 00:05:01.968 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:01.968 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:01.968 #undef SPDK_CONFIG_FUSE 00:05:01.968 #undef SPDK_CONFIG_FUZZER 00:05:01.968 #define SPDK_CONFIG_FUZZER_LIB 00:05:01.968 #undef SPDK_CONFIG_GOLANG 00:05:01.968 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:01.968 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:01.968 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:01.968 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:01.968 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:01.968 #define SPDK_CONFIG_IDXD 1 00:05:01.968 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:01.968 #undef SPDK_CONFIG_IPSEC_MB 00:05:01.968 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:01.968 #define SPDK_CONFIG_ISAL 1 00:05:01.968 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:01.968 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:01.968 #define SPDK_CONFIG_LIBDIR 00:05:01.968 #undef SPDK_CONFIG_LTO 00:05:01.968 #define SPDK_CONFIG_MAX_LCORES 00:05:01.968 #define SPDK_CONFIG_NVME_CUSE 1 00:05:01.968 #undef SPDK_CONFIG_OCF 00:05:01.968 #define SPDK_CONFIG_OCF_PATH 00:05:01.968 #define SPDK_CONFIG_OPENSSL_PATH 00:05:01.968 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:01.968 #undef SPDK_CONFIG_PGO_USE 00:05:01.968 #define SPDK_CONFIG_PREFIX /usr/local 00:05:01.968 #define SPDK_CONFIG_RAID5F 1 00:05:01.968 #undef SPDK_CONFIG_RBD 00:05:01.968 #define SPDK_CONFIG_RDMA 1 00:05:01.968 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:01.968 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:01.968 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:01.968 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:01.968 #undef SPDK_CONFIG_SHARED 00:05:01.968 #undef SPDK_CONFIG_SMA 00:05:01.968 #define SPDK_CONFIG_TESTS 1 00:05:01.968 
#undef SPDK_CONFIG_TSAN 00:05:01.968 #undef SPDK_CONFIG_UBLK 00:05:01.968 #define SPDK_CONFIG_UBSAN 1 00:05:01.968 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:01.968 #undef SPDK_CONFIG_URING 00:05:01.968 #define SPDK_CONFIG_URING_PATH 00:05:01.968 #undef SPDK_CONFIG_URING_ZNS 00:05:01.968 #undef SPDK_CONFIG_USDT 00:05:01.968 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:01.968 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:01.968 #undef SPDK_CONFIG_VFIO_USER 00:05:01.968 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:01.968 #define SPDK_CONFIG_VHOST 1 00:05:01.968 #define SPDK_CONFIG_VIRTIO 1 00:05:01.968 #undef SPDK_CONFIG_VTUNE 00:05:01.968 #define SPDK_CONFIG_VTUNE_DIR 00:05:01.968 #define SPDK_CONFIG_WERROR 1 00:05:01.968 #define SPDK_CONFIG_WPDK_DIR 00:05:01.968 #undef SPDK_CONFIG_XNVME 00:05:01.968 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:01.968 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:01.968 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.968 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:01.968 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.968 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.968 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:01.969 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:01.969 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:01.969 ++++ export PATH 00:05:01.969 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:01.969 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:01.969 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:01.969 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:01.969 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:01.969 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:01.969 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:01.969 +++ TEST_TAG=N/A 00:05:01.969 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:01.969 ++ : 1 00:05:01.969 ++ export RUN_NIGHTLY 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_RUN_VALGRIND 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_TEST_UNITTEST 00:05:01.969 ++ : 00:05:01.969 ++ export SPDK_TEST_AUTOBUILD 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_RELEASE_BUILD 00:05:01.969 ++ : 0 
00:05:01.969 ++ export SPDK_TEST_ISAL 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_ISCSI 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_TEST_NVME 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVME_PMR 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVME_BP 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVME_CLI 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVME_CUSE 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVME_FDP 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVMF 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VFIOUSER 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_FUZZER 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_FUZZER_SHORT 00:05:01.969 ++ : rdma 00:05:01.969 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_RBD 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VHOST 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_TEST_BLOCKDEV 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_IOAT 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_BLOBFS 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VHOST_INIT 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_LVOL 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_RUN_ASAN 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_RUN_UBSAN 00:05:01.969 ++ : 00:05:01.969 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_RUN_NON_ROOT 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_CRYPTO 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_FTL 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_OCF 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_VMD 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_OPAL 00:05:01.969 ++ : 00:05:01.969 ++ export SPDK_TEST_NATIVE_DPDK 00:05:01.969 ++ : true 00:05:01.969 ++ export SPDK_AUTOTEST_X 00:05:01.969 ++ : 1 00:05:01.969 ++ export SPDK_TEST_RAID5 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_URING 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_USDT 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_USE_IGB_UIO 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_SCHEDULER 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_SCANBUILD 00:05:01.969 ++ : 00:05:01.969 ++ export SPDK_TEST_NVMF_NICS 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_SMA 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_DAOS 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_XNVME 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_ACCEL_DSA 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_ACCEL_IAA 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_ACCEL_IOAT 00:05:01.969 ++ : 00:05:01.969 ++ export SPDK_TEST_FUZZER_TARGET 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_TEST_NVMF_MDNS 00:05:01.969 ++ : 0 00:05:01.969 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:01.969 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:01.969 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:01.969 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:01.969 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:01.969 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:01.969 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:01.969 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:01.969 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:01.969 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:01.969 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:01.969 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:01.969 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:01.969 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:01.969 ++ PYTHONDONTWRITEBYTECODE=1 00:05:01.969 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:01.969 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:01.969 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:01.969 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:01.969 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:01.969 ++ rm -rf /var/tmp/asan_suppression_file 00:05:01.969 ++ cat 00:05:01.969 ++ echo leak:libfuse3.so 00:05:01.969 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:01.969 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:01.969 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:01.969 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:01.969 ++ '[' -z /var/spdk/dependencies ']' 00:05:01.969 ++ export DEPENDENCY_DIR 00:05:01.969 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:01.969 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:01.969 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:01.969 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:01.969 ++ export QEMU_BIN= 00:05:01.969 ++ QEMU_BIN= 00:05:01.969 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:01.969 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:01.969 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:01.969 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:01.969 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:01.969 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:01.969 ++ '[' 0 -eq 0 ']' 00:05:01.969 ++ export valgrind= 00:05:01.969 ++ valgrind= 00:05:01.969 +++ uname -s 00:05:01.969 ++ '[' Linux = Linux ']' 00:05:01.969 ++ HUGEMEM=4096 00:05:01.969 ++ export CLEAR_HUGE=yes 00:05:01.969 ++ CLEAR_HUGE=yes 00:05:01.969 ++ [[ 0 -eq 1 ]] 00:05:01.969 ++ [[ 0 -eq 1 ]] 00:05:01.969 ++ MAKE=make 00:05:01.969 +++ nproc 00:05:01.969 ++ MAKEFLAGS=-j10 00:05:01.969 ++ export HUGEMEM=4096 00:05:01.969 ++ HUGEMEM=4096 00:05:01.969 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:01.969 ++ NO_HUGE=() 00:05:01.969 ++ TEST_MODE= 00:05:01.969 ++ [[ -z '' ]] 00:05:01.969 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:01.969 ++ exec 00:05:01.969 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:01.969 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:01.969 ++ set_test_storage 2147483648 00:05:01.969 ++ [[ -v testdir ]] 00:05:01.969 ++ local requested_size=2147483648 00:05:01.969 ++ local mount target_dir 00:05:01.969 ++ local -A mounts fss sizes avails uses 00:05:01.969 ++ local source fs size avail mount use 00:05:01.969 ++ local storage_fallback storage_candidates 00:05:01.969 +++ mktemp -udt spdk.XXXXXX 00:05:01.969 ++ storage_fallback=/tmp/spdk.5IBBT8 00:05:01.969 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:01.969 ++ [[ -n '' ]] 00:05:01.969 ++ [[ -n '' ]] 00:05:01.969 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.5IBBT8/tests/unit /tmp/spdk.5IBBT8 00:05:01.969 ++ requested_size=2214592512 00:05:01.969 ++ read -r source fs size use avail _ mount 00:05:01.969 +++ df -T 00:05:01.969 +++ grep -v Filesystem 00:05:01.969 ++ mounts["$mount"]=udev 00:05:01.969 ++ fss["$mount"]=devtmpfs 00:05:01.969 ++ avails["$mount"]=6224457728 00:05:01.969 ++ sizes["$mount"]=6224457728 00:05:01.969 ++ uses["$mount"]=0 00:05:01.969 ++ read -r source fs size use avail _ mount 00:05:01.969 ++ mounts["$mount"]=tmpfs 00:05:01.969 ++ fss["$mount"]=tmpfs 00:05:01.969 ++ avails["$mount"]=1253408768 00:05:01.969 ++ sizes["$mount"]=1254514688 00:05:01.969 ++ uses["$mount"]=1105920 00:05:01.969 ++ read -r source fs size use avail _ mount 00:05:01.969 ++ mounts["$mount"]=/dev/vda1 00:05:01.969 ++ fss["$mount"]=ext4 00:05:01.969 ++ avails["$mount"]=10735169536 00:05:01.969 ++ sizes["$mount"]=20616794112 00:05:01.969 ++ uses["$mount"]=9864847360 00:05:01.969 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=tmpfs 00:05:01.970 ++ fss["$mount"]=tmpfs 00:05:01.970 ++ avails["$mount"]=6272557056 00:05:01.970 ++ sizes["$mount"]=6272557056 00:05:01.970 ++ uses["$mount"]=0 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=tmpfs 00:05:01.970 ++ fss["$mount"]=tmpfs 00:05:01.970 ++ avails["$mount"]=5242880 00:05:01.970 ++ sizes["$mount"]=5242880 00:05:01.970 ++ uses["$mount"]=0 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=tmpfs 00:05:01.970 ++ fss["$mount"]=tmpfs 00:05:01.970 ++ avails["$mount"]=6272557056 00:05:01.970 ++ sizes["$mount"]=6272557056 00:05:01.970 ++ uses["$mount"]=0 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=/dev/loop0 00:05:01.970 ++ fss["$mount"]=squashfs 00:05:01.970 ++ avails["$mount"]=0 00:05:01.970 ++ sizes["$mount"]=67108864 00:05:01.970 ++ uses["$mount"]=67108864 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=/dev/loop1 00:05:01.970 ++ fss["$mount"]=squashfs 00:05:01.970 ++ avails["$mount"]=0 00:05:01.970 ++ sizes["$mount"]=41025536 00:05:01.970 ++ uses["$mount"]=41025536 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=/dev/loop2 00:05:01.970 ++ fss["$mount"]=squashfs 00:05:01.970 ++ avails["$mount"]=0 00:05:01.970 ++ sizes["$mount"]=96337920 00:05:01.970 ++ uses["$mount"]=96337920 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=/dev/vda15 00:05:01.970 ++ 
fss["$mount"]=vfat 00:05:01.970 ++ avails["$mount"]=103089152 00:05:01.970 ++ sizes["$mount"]=109422592 00:05:01.970 ++ uses["$mount"]=6334464 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=tmpfs 00:05:01.970 ++ fss["$mount"]=tmpfs 00:05:01.970 ++ avails["$mount"]=1254510592 00:05:01.970 ++ sizes["$mount"]=1254510592 00:05:01.970 ++ uses["$mount"]=0 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:05:01.970 ++ fss["$mount"]=fuse.sshfs 00:05:01.970 ++ avails["$mount"]=94464266240 00:05:01.970 ++ sizes["$mount"]=105088212992 00:05:01.970 ++ uses["$mount"]=5238513664 00:05:01.970 ++ read -r source fs size use avail _ mount 00:05:01.970 ++ printf '* Looking for test storage...\n' 00:05:01.970 * Looking for test storage... 00:05:01.970 ++ local target_space new_size 00:05:01.970 ++ for target_dir in "${storage_candidates[@]}" 00:05:01.970 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:01.970 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:01.970 ++ mount=/ 00:05:01.970 ++ target_space=10735169536 00:05:01.970 ++ (( target_space == 0 || target_space < requested_size )) 00:05:01.970 ++ (( target_space >= requested_size )) 00:05:01.970 ++ [[ ext4 == tmpfs ]] 00:05:01.970 ++ [[ ext4 == ramfs ]] 00:05:01.970 ++ [[ / == / ]] 00:05:01.970 ++ new_size=12079439872 00:05:01.970 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:01.970 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:01.970 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:01.970 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:01.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:01.970 ++ return 0 00:05:01.970 ++ set -o errtrace 00:05:01.970 ++ shopt -s extdebug 00:05:01.970 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:01.970 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:01.970 13:28:41 -- common/autotest_common.sh@1672 -- # true 00:05:01.970 13:28:41 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:01.970 13:28:41 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:01.970 13:28:41 -- common/autotest_common.sh@29 -- # exec 00:05:01.970 13:28:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:01.970 13:28:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:01.970 13:28:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:01.970 13:28:41 -- common/autotest_common.sh@18 -- # set -x 00:05:01.970 13:28:41 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:01.970 13:28:41 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:01.970 13:28:41 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:01.970 13:28:41 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:01.970 13:28:41 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:01.970 13:28:41 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:01.970 13:28:41 -- unit/unittest.sh@179 -- # hash lcov 00:05:01.970 13:28:41 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:01.970 13:28:41 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:01.970 13:28:41 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:01.970 13:28:41 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:01.970 13:28:41 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:01.970 13:28:41 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:01.970 13:28:41 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:01.970 13:28:41 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:01.970 --rc lcov_branch_coverage=1 00:05:01.970 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 ' 00:05:01.970 13:28:41 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:01.970 --rc lcov_branch_coverage=1 00:05:01.970 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 ' 00:05:01.970 13:28:41 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:01.970 --rc lcov_branch_coverage=1 00:05:01.970 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --no-external' 00:05:01.970 13:28:41 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:01.970 --rc lcov_branch_coverage=1 00:05:01.970 --rc lcov_function_coverage=1 00:05:01.970 --rc genhtml_branch_coverage=1 00:05:01.970 --rc genhtml_function_coverage=1 00:05:01.970 --rc genhtml_legend=1 00:05:01.970 --rc geninfo_all_blocks=1 00:05:01.970 --no-external' 00:05:01.970 13:28:41 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:03.875 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:03.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:03.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:03.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:03.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:04.135 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:04.135 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:04.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:04.395 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:04.395 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:04.395 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:43.128 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:43.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:43.128 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:43.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:43.128 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:43.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:47.323 13:29:26 -- unit/unittest.sh@206 -- # uname -m 00:05:47.323 13:29:26 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:47.323 13:29:26 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:47.323 13:29:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.323 13:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.323 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.323 ************************************ 00:05:47.323 START TEST unittest_pci_event 00:05:47.323 ************************************ 00:05:47.323 13:29:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:47.323 00:05:47.323 00:05:47.323 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.323 http://cunit.sourceforge.net/ 00:05:47.323 00:05:47.323 00:05:47.323 Suite: pci_event 00:05:47.323 Test: test_pci_parse_event ...[2024-07-10 13:29:26.094904] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:47.323 [2024-07-10 13:29:26.095489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:47.323 passed 00:05:47.323 00:05:47.323 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.323 suites 1 1 n/a 0 0 00:05:47.323 tests 1 1 1 0 0 00:05:47.323 asserts 15 15 15 0 n/a 00:05:47.323 00:05:47.323 Elapsed time = 0.001 seconds 00:05:47.323 00:05:47.323 real 0m0.053s 00:05:47.323 user 0m0.024s 00:05:47.323 sys 0m0.024s 00:05:47.323 13:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.323 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.323 ************************************ 00:05:47.323 END TEST unittest_pci_event 00:05:47.323 ************************************ 00:05:47.323 13:29:26 -- 
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:47.323 13:29:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.323 13:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.323 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.323 ************************************ 00:05:47.323 START TEST unittest_include 00:05:47.323 ************************************ 00:05:47.323 13:29:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:47.323 00:05:47.323 00:05:47.323 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.323 http://cunit.sourceforge.net/ 00:05:47.323 00:05:47.323 00:05:47.323 Suite: histogram 00:05:47.323 Test: histogram_test ...passed 00:05:47.323 Test: histogram_merge ...passed 00:05:47.323 00:05:47.323 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.323 suites 1 1 n/a 0 0 00:05:47.323 tests 2 2 2 0 0 00:05:47.323 asserts 50 50 50 0 n/a 00:05:47.323 00:05:47.323 Elapsed time = 0.008 seconds 00:05:47.323 00:05:47.323 real 0m0.052s 00:05:47.323 user 0m0.028s 00:05:47.323 sys 0m0.025s 00:05:47.323 13:29:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.323 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.323 ************************************ 00:05:47.323 END TEST unittest_include 00:05:47.323 ************************************ 00:05:47.323 13:29:26 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:47.323 13:29:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.323 13:29:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.323 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.323 ************************************ 00:05:47.323 START TEST unittest_bdev 00:05:47.323 ************************************ 00:05:47.323 13:29:26 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:47.323 13:29:26 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:47.323 00:05:47.323 00:05:47.323 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.323 http://cunit.sourceforge.net/ 00:05:47.323 00:05:47.323 00:05:47.323 Suite: bdev 00:05:47.323 Test: bytes_to_blocks_test ...passed 00:05:47.323 Test: num_blocks_test ...passed 00:05:47.323 Test: io_valid_test ...passed 00:05:47.323 Test: open_write_test ...[2024-07-10 13:29:26.386305] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:47.323 [2024-07-10 13:29:26.386613] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:47.323 [2024-07-10 13:29:26.386717] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:47.323 passed 00:05:47.323 Test: claim_test ...passed 00:05:47.323 Test: alias_add_del_test ...[2024-07-10 13:29:26.452921] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:47.323 [2024-07-10 13:29:26.453077] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:47.323 [2024-07-10 13:29:26.453126] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:05:47.323 passed 00:05:47.323 Test: get_device_stat_test ...passed 00:05:47.323 Test: bdev_io_types_test ...passed 00:05:47.323 Test: bdev_io_wait_test ...passed 00:05:47.323 Test: bdev_io_spans_split_test ...passed 00:05:47.323 Test: bdev_io_boundary_split_test ...passed 00:05:47.323 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-10 13:29:26.595624] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:47.323 passed 00:05:47.323 Test: bdev_io_mix_split_test ...passed 00:05:47.323 Test: bdev_io_split_with_io_wait ...passed 00:05:47.582 Test: bdev_io_write_unit_split_test ...[2024-07-10 13:29:26.700429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:47.582 [2024-07-10 13:29:26.700598] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:47.582 [2024-07-10 13:29:26.700632] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:47.582 [2024-07-10 13:29:26.700686] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:47.582 passed 00:05:47.582 Test: bdev_io_alignment_with_boundary ...passed 00:05:47.582 Test: bdev_io_alignment ...passed 00:05:47.582 Test: bdev_histograms ...passed 00:05:47.582 Test: bdev_write_zeroes ...passed 00:05:47.582 Test: bdev_compare_and_write ...passed 00:05:47.841 Test: bdev_compare ...passed 00:05:47.841 Test: bdev_compare_emulated ...passed 00:05:47.841 Test: bdev_zcopy_write ...passed 00:05:47.841 Test: bdev_zcopy_read ...passed 00:05:47.841 Test: bdev_open_while_hotremove ...passed 00:05:47.841 Test: bdev_close_while_hotremove ...passed 00:05:47.841 Test: bdev_open_ext_test ...[2024-07-10 13:29:27.084711] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:47.841 passed 00:05:47.841 Test: bdev_open_ext_unregister ...[2024-07-10 13:29:27.084953] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:47.841 passed 00:05:47.841 Test: bdev_set_io_timeout ...passed 00:05:47.841 Test: bdev_set_qd_sampling ...passed 00:05:47.841 Test: lba_range_overlap ...passed 00:05:47.841 Test: lock_lba_range_check_ranges ...passed 00:05:48.099 Test: lock_lba_range_with_io_outstanding ...passed 00:05:48.099 Test: lock_lba_range_overlapped ...passed 00:05:48.099 Test: bdev_quiesce ...[2024-07-10 13:29:27.268481] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
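The coverage capture and the per-suite runs recorded above can be reproduced by hand. A minimal sketch, assuming an SPDK tree built with coverage enabled: LCOV_OPTS, UT_COVERAGE, ut_cov_base.info and the bdev_ut binary path all appear verbatim earlier in this log, while the $SPDK shorthand is only an illustration, not part of the captured run:

  SPDK=/home/vagrant/spdk_repo/spdk      # illustrative shorthand for the repo path used throughout this log
  UT_COVERAGE=$SPDK/../output/ut_coverage  # same location unittest.sh selected above
  mkdir -p "$UT_COVERAGE"
  # Subset of the switches unittest.sh exports as LCOV_OPTS earlier in this log.
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  # Pre-test baseline capture (-i), matching the 'lcov ... -t Baseline -o .../ut_cov_base.info' call above.
  lcov $LCOV_OPTS --no-external -q -c -i -d "$SPDK" -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"
  # Each run_test entry above simply executes one CUnit binary; running one directly
  # reprints the Suite / Run Summary blocks seen in this output.
  "$SPDK/test/unit/lib/bdev/bdev.c/bdev_ut"

The 'geninfo: WARNING ... no functions found' lines above come from .gcno objects with no executable code (the test/cpp_headers header-compilation checks); as the continuing output shows, they did not stop this run.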
00:05:48.099 passed 00:05:48.099 Test: bdev_io_abort ...passed 00:05:48.099 Test: bdev_unmap ...passed 00:05:48.099 Test: bdev_write_zeroes_split_test ...passed 00:05:48.099 Test: bdev_set_options_test ...passed 00:05:48.099 Test: bdev_get_memory_domains ...passed 00:05:48.099 Test: bdev_io_ext ...[2024-07-10 13:29:27.379200] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:48.099 passed 00:05:48.099 Test: bdev_io_ext_no_opts ...passed 00:05:48.359 Test: bdev_io_ext_invalid_opts ...passed 00:05:48.359 Test: bdev_io_ext_split ...passed 00:05:48.359 Test: bdev_io_ext_bounce_buffer ...passed 00:05:48.359 Test: bdev_register_uuid_alias ...[2024-07-10 13:29:27.554684] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name c16948d8-e9f0-41d3-82e2-04d968e607ec already exists 00:05:48.359 [2024-07-10 13:29:27.554810] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:c16948d8-e9f0-41d3-82e2-04d968e607ec alias for bdev bdev0 00:05:48.359 passed 00:05:48.359 Test: bdev_unregister_by_name ...[2024-07-10 13:29:27.571967] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:48.359 [2024-07-10 13:29:27.572040] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:48.359 passed 00:05:48.359 Test: for_each_bdev_test ...passed 00:05:48.359 Test: bdev_seek_test ...passed 00:05:48.359 Test: bdev_copy ...passed 00:05:48.359 Test: bdev_copy_split_test ...passed 00:05:48.359 Test: examine_locks ...passed 00:05:48.359 Test: claim_v2_rwo ...[2024-07-10 13:29:27.676984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677066] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677244] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:48.359 passed 00:05:48.359 Test: claim_v2_rom ...[2024-07-10 13:29:27.677413] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677475] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677511] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677546] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:48.359 [2024-07-10 13:29:27.677597] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:48.359 [2024-07-10 13:29:27.677642] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:48.359 passed 00:05:48.359 Test: claim_v2_rwm ...[2024-07-10 13:29:27.677774] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:48.360 [2024-07-10 13:29:27.677842] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.677892] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.677930] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.677959] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.677996] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.678052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:48.360 passed 00:05:48.360 Test: claim_v2_existing_writer ...[2024-07-10 13:29:27.678210] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:48.360 [2024-07-10 13:29:27.678253] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:48.360 passed 00:05:48.360 Test: claim_v2_existing_v1 ...[2024-07-10 13:29:27.678401] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.678444] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.678473] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:48.360 passed 00:05:48.360 Test: claim_v1_existing_v2 ...[2024-07-10 13:29:27.678608] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:48.360 [2024-07-10 13:29:27.678660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:48.360 [2024-07-10 
13:29:27.678706] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:48.360 passed 00:05:48.360 Test: examine_claimed ...[2024-07-10 13:29:27.678971] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:48.360 passed 00:05:48.360 00:05:48.360 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.360 suites 1 1 n/a 0 0 00:05:48.360 tests 59 59 59 0 0 00:05:48.360 asserts 4599 4599 4599 0 n/a 00:05:48.360 00:05:48.360 Elapsed time = 1.341 seconds 00:05:48.360 13:29:27 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:48.619 00:05:48.619 00:05:48.619 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.619 http://cunit.sourceforge.net/ 00:05:48.619 00:05:48.619 00:05:48.619 Suite: nvme 00:05:48.619 Test: test_create_ctrlr ...passed 00:05:48.619 Test: test_reset_ctrlr ...[2024-07-10 13:29:27.736685] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 passed 00:05:48.619 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:48.619 Test: test_failover_ctrlr ...passed 00:05:48.619 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-10 13:29:27.739347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.739545] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.739716] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 passed 00:05:48.619 Test: test_pending_reset ...[2024-07-10 13:29:27.740970] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.741211] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 passed 00:05:48.619 Test: test_attach_ctrlr ...[2024-07-10 13:29:27.742048] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:48.619 passed 00:05:48.619 Test: test_aer_cb ...passed 00:05:48.619 Test: test_submit_nvme_cmd ...passed 00:05:48.619 Test: test_add_remove_trid ...passed 00:05:48.619 Test: test_abort ...[2024-07-10 13:29:27.744671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:48.619 passed 00:05:48.619 Test: test_get_io_qpair ...passed 00:05:48.619 Test: test_bdev_unregister ...passed 00:05:48.619 Test: test_compare_ns ...passed 00:05:48.619 Test: test_init_ana_log_page ...passed 00:05:48.619 Test: test_get_memory_domains ...passed 00:05:48.619 Test: test_reconnect_qpair ...[2024-07-10 13:29:27.746994] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
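
The claim_v2_* sequences above walk every rejection path of the v2 claim API: a second claimer on a read-many-write-one bdev, key options on claim types that forbid them, and mixing v1 with v2 claims. A hedged sketch of the happy path those tests imply, with a placeholder module struct standing in for a real registered bdev module:

    #include "spdk/bdev_module.h"

    static struct spdk_bdev_module g_example_module = { .name = "bdev_ut" };

    static int
    claim_read_many_write_one(struct spdk_bdev_desc *desc)
    {
            struct spdk_bdev_claim_opts opts;

            spdk_bdev_claim_opts_init(&opts, sizeof(opts));
            /* Per claim_v2_rwo above, shared_claim_key must stay unset:
             * "key option not supported with read-write-once claims". */
            return spdk_bdev_module_claim_bdev_desc(desc,
                            SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
                            &opts, &g_example_module);
    }
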
00:05:48.619 passed 00:05:48.619 Test: test_create_bdev_ctrlr ...[2024-07-10 13:29:27.747454] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:48.619 passed 00:05:48.619 Test: test_add_multi_ns_to_bdev ...[2024-07-10 13:29:27.748477] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:48.619 passed 00:05:48.619 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:48.619 Test: test_admin_path ...passed 00:05:48.619 Test: test_reset_bdev_ctrlr ...passed 00:05:48.619 Test: test_find_io_path ...passed 00:05:48.619 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:48.619 Test: test_retry_io_for_io_path_error ...passed 00:05:48.619 Test: test_retry_io_count ...passed 00:05:48.619 Test: test_concurrent_read_ana_log_page ...passed 00:05:48.619 Test: test_retry_io_for_ana_error ...passed 00:05:48.619 Test: test_check_io_error_resiliency_params ...[2024-07-10 13:29:27.754161] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:48.619 [2024-07-10 13:29:27.754234] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:48.619 [2024-07-10 13:29:27.754278] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:48.619 [2024-07-10 13:29:27.754320] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:48.619 [2024-07-10 13:29:27.754358] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:48.619 [2024-07-10 13:29:27.754413] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:48.619 [2024-07-10 13:29:27.754446] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:48.619 [2024-07-10 13:29:27.754504] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:48.619 [2024-07-10 13:29:27.754547] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:48.619 passed 00:05:48.619 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:48.619 Test: test_reconnect_ctrlr ...[2024-07-10 13:29:27.755255] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.755373] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
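
test_check_io_error_resiliency_params above spells out the full rule set for the reconnect knobs: ctrlr_loss_timeout_sec is at least -1; reconnect_delay_sec is zero exactly when ctrlr_loss_timeout_sec is; and reconnect_delay_sec <= fast_io_fail_timeout_sec <= ctrlr_loss_timeout_sec whenever those bounds are finite. A standalone restatement of the checks (not the SPDK function itself, whose error strings appear above):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    check_reconnect_params(int32_t ctrlr_loss_timeout_sec,
                           uint32_t reconnect_delay_sec,
                           uint32_t fast_io_fail_timeout_sec)
    {
            if (ctrlr_loss_timeout_sec < -1) {
                    return false;   /* "can't be less than -1" */
            }
            if (ctrlr_loss_timeout_sec == 0) {
                    /* both delays must also be 0 */
                    return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
            }
            if (reconnect_delay_sec == 0) {
                    return false;   /* nonzero when loss timeout is nonzero */
            }
            if (fast_io_fail_timeout_sec != 0 &&
                reconnect_delay_sec > fast_io_fail_timeout_sec) {
                    return false;   /* delay <= fast_io_fail */
            }
            if (ctrlr_loss_timeout_sec > 0 &&
                (int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec) {
                    return false;   /* delay <= loss timeout */
            }
            if (ctrlr_loss_timeout_sec > 0 && fast_io_fail_timeout_sec != 0 &&
                (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
                    return false;   /* fast_io_fail <= loss timeout */
            }
            return true;            /* ctrlr_loss_timeout_sec == -1 retries forever */
    }
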
00:05:48.619 [2024-07-10 13:29:27.755596] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.755700] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.755814] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 passed 00:05:48.619 Test: test_retry_failover_ctrlr ...[2024-07-10 13:29:27.756199] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 passed 00:05:48.619 Test: test_fail_path ...[2024-07-10 13:29:27.756679] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.619 [2024-07-10 13:29:27.756797] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 [2024-07-10 13:29:27.756930] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 [2024-07-10 13:29:27.757024] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 [2024-07-10 13:29:27.757173] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 passed 00:05:48.620 Test: test_nvme_ns_cmp ...passed 00:05:48.620 Test: test_ana_transition ...passed 00:05:48.620 Test: test_set_preferred_path ...passed 00:05:48.620 Test: test_find_next_io_path ...passed 00:05:48.620 Test: test_find_io_path_min_qd ...passed 00:05:48.620 Test: test_disable_auto_failback ...[2024-07-10 13:29:27.758602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 passed 00:05:48.620 Test: test_set_multipath_policy ...passed 00:05:48.620 Test: test_uuid_generation ...passed 00:05:48.620 Test: test_retry_io_to_same_path ...passed 00:05:48.620 Test: test_race_between_reset_and_disconnected ...passed 00:05:48.620 Test: test_ctrlr_op_rpc ...passed 00:05:48.620 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:48.620 Test: test_disable_enable_ctrlr ...[2024-07-10 13:29:27.761703] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:48.620 [2024-07-10 13:29:27.761842] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:48.620 passed 00:05:48.620 Test: test_delete_ctrlr_done ...passed 00:05:48.620 Test: test_ns_remove_during_reset ...passed 00:05:48.620 00:05:48.620 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.620 suites 1 1 n/a 0 0 00:05:48.620 tests 48 48 48 0 0 00:05:48.620 asserts 3553 3553 3553 0 n/a 00:05:48.620 00:05:48.620 Elapsed time = 0.025 seconds 00:05:48.620 13:29:27 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:48.620 Test Options 00:05:48.620 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:48.620 00:05:48.620 00:05:48.620 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.620 http://cunit.sourceforge.net/ 00:05:48.620 00:05:48.620 00:05:48.620 Suite: raid 00:05:48.620 Test: test_create_raid ...passed 00:05:48.620 Test: test_create_raid_superblock ...passed 00:05:48.620 Test: test_delete_raid ...passed 00:05:48.620 Test: test_create_raid_invalid_args ...[2024-07-10 13:29:27.816910] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:48.620 [2024-07-10 13:29:27.817476] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:48.620 [2024-07-10 13:29:27.818085] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:48.620 [2024-07-10 13:29:27.818424] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:48.620 [2024-07-10 13:29:27.819473] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:48.620 passed 00:05:48.620 Test: test_delete_raid_invalid_args ...passed 00:05:48.620 Test: test_io_channel ...passed 00:05:48.620 Test: test_reset_io ...passed 00:05:48.620 Test: test_write_io ...passed 00:05:48.620 Test: test_read_io ...passed 00:05:49.188 Test: test_unmap_io ...passed 00:05:49.188 Test: test_io_failure ...[2024-07-10 13:29:28.545048] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:49.188 passed 00:05:49.188 Test: test_multi_raid_no_io ...passed 00:05:49.188 Test: test_multi_raid_with_io ...passed 00:05:49.188 Test: test_io_type_supported ...passed 00:05:49.188 Test: test_raid_json_dump_info ...passed 00:05:49.188 Test: test_context_size ...passed 00:05:49.188 Test: test_raid_level_conversions ...passed 00:05:49.448 Test: test_raid_process ...passed 00:05:49.448 Test: test_raid_io_split ...passed 00:05:49.448 00:05:49.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.448 suites 1 1 n/a 0 0 00:05:49.448 tests 19 19 19 0 0 00:05:49.448 asserts 177879 177879 177879 0 n/a 00:05:49.448 00:05:49.448 Elapsed time = 0.740 seconds 00:05:49.448 13:29:28 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:49.448 00:05:49.448 00:05:49.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.448 http://cunit.sourceforge.net/ 00:05:49.448 00:05:49.448 00:05:49.448 Suite: raid_sb 00:05:49.448 Test: test_raid_bdev_write_superblock ...passed 00:05:49.448 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:49.448 Test: 
test_raid_bdev_parse_superblock ...[2024-07-10 13:29:28.605520] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:49.448 passed 00:05:49.448 00:05:49.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.448 suites 1 1 n/a 0 0 00:05:49.448 tests 3 3 3 0 0 00:05:49.448 asserts 32 32 32 0 n/a 00:05:49.448 00:05:49.448 Elapsed time = 0.001 seconds 00:05:49.448 13:29:28 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:49.448 00:05:49.448 00:05:49.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.448 http://cunit.sourceforge.net/ 00:05:49.448 00:05:49.448 00:05:49.448 Suite: concat 00:05:49.448 Test: test_concat_start ...passed 00:05:49.448 Test: test_concat_rw ...passed 00:05:49.448 Test: test_concat_null_payload ...passed 00:05:49.448 00:05:49.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.448 suites 1 1 n/a 0 0 00:05:49.448 tests 3 3 3 0 0 00:05:49.448 asserts 8097 8097 8097 0 n/a 00:05:49.448 00:05:49.448 Elapsed time = 0.008 seconds 00:05:49.448 13:29:28 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:49.448 00:05:49.448 00:05:49.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.448 http://cunit.sourceforge.net/ 00:05:49.448 00:05:49.448 00:05:49.448 Suite: raid1 00:05:49.448 Test: test_raid1_start ...passed 00:05:49.448 Test: test_raid1_read_balancing ...passed 00:05:49.448 00:05:49.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.448 suites 1 1 n/a 0 0 00:05:49.448 tests 2 2 2 0 0 00:05:49.448 asserts 2856 2856 2856 0 n/a 00:05:49.448 00:05:49.448 Elapsed time = 0.004 seconds 00:05:49.448 13:29:28 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:49.448 00:05:49.448 00:05:49.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.448 http://cunit.sourceforge.net/ 00:05:49.448 00:05:49.448 00:05:49.448 Suite: zone 00:05:49.448 Test: test_zone_get_operation ...passed 00:05:49.448 Test: test_bdev_zone_get_info ...passed 00:05:49.448 Test: test_bdev_zone_management ...passed 00:05:49.448 Test: test_bdev_zone_append ...passed 00:05:49.448 Test: test_bdev_zone_append_with_md ...passed 00:05:49.448 Test: test_bdev_zone_appendv ...passed 00:05:49.448 Test: test_bdev_zone_appendv_with_md ...passed 00:05:49.448 Test: test_bdev_io_get_append_location ...passed 00:05:49.448 00:05:49.448 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.448 suites 1 1 n/a 0 0 00:05:49.448 tests 8 8 8 0 0 00:05:49.448 asserts 94 94 94 0 n/a 00:05:49.448 00:05:49.448 Elapsed time = 0.001 seconds 00:05:49.448 13:29:28 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:49.448 00:05:49.448 00:05:49.448 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.448 http://cunit.sourceforge.net/ 00:05:49.448 00:05:49.448 00:05:49.448 Suite: gpt_parse 00:05:49.448 Test: test_parse_mbr_and_primary ...[2024-07-10 13:29:28.787212] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:49.448 [2024-07-10 13:29:28.787676] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:49.448 [2024-07-10 13:29:28.787753] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:49.448 [2024-07-10 13:29:28.787865] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:49.448 [2024-07-10 13:29:28.787929] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:49.448 [2024-07-10 13:29:28.788036] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:49.448 passed 00:05:49.448 Test: test_parse_secondary ...[2024-07-10 13:29:28.789361] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:49.448 [2024-07-10 13:29:28.789454] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:49.448 [2024-07-10 13:29:28.789506] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:49.448 [2024-07-10 13:29:28.789547] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:49.448 passed 00:05:49.448 Test: test_check_mbr ...[2024-07-10 13:29:28.790827] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:49.448 passed 00:05:49.448 Test: test_read_header ...[2024-07-10 13:29:28.790906] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:49.448 [2024-07-10 13:29:28.790994] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:49.448 [2024-07-10 13:29:28.791126] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:49.448 [2024-07-10 13:29:28.791237] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:49.448 [2024-07-10 13:29:28.791306] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:49.448 [2024-07-10 13:29:28.791357] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:49.449 [2024-07-10 13:29:28.791403] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:49.449 passed 00:05:49.449 Test: test_read_partitions ...[2024-07-10 13:29:28.791495] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:49.449 [2024-07-10 13:29:28.791540] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:49.449 [2024-07-10 13:29:28.791571] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:49.449 [2024-07-10 13:29:28.791591] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:49.449 [2024-07-10 13:29:28.792017] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:05:49.449 passed 00:05:49.449 00:05:49.449 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.449 suites 1 1 n/a 0 0 00:05:49.449 tests 5 5 5 0 0 00:05:49.449 asserts 33 33 33 0 n/a 00:05:49.449 00:05:49.449 Elapsed time = 0.006 seconds 00:05:49.709 13:29:28 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:49.709 00:05:49.709 00:05:49.709 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.709 http://cunit.sourceforge.net/ 00:05:49.709 00:05:49.709 00:05:49.709 Suite: bdev_part 00:05:49.709 Test: part_test ...[2024-07-10 13:29:28.845390] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:49.709 passed 00:05:49.709 Test: part_free_test ...passed 00:05:49.709 Test: part_get_io_channel_test ...passed 00:05:49.709 Test: part_construct_ext ...passed 00:05:49.709 00:05:49.709 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.709 suites 1 1 n/a 0 0 00:05:49.709 tests 4 4 4 0 0 00:05:49.709 asserts 48 48 48 0 n/a 00:05:49.709 00:05:49.709 Elapsed time = 0.038 seconds 00:05:49.709 13:29:28 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:49.709 00:05:49.709 00:05:49.709 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.709 http://cunit.sourceforge.net/ 00:05:49.709 00:05:49.709 00:05:49.709 Suite: scsi_nvme_suite 00:05:49.709 Test: scsi_nvme_translate_test ...passed 00:05:49.709 00:05:49.709 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.709 suites 1 1 n/a 0 0 00:05:49.709 tests 1 1 1 0 0 00:05:49.709 asserts 104 104 104 0 n/a 00:05:49.709 00:05:49.709 Elapsed time = 0.000 seconds 00:05:49.709 13:29:28 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:49.709 00:05:49.709 00:05:49.709 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.709 http://cunit.sourceforge.net/ 00:05:49.709 00:05:49.709 00:05:49.709 Suite: lvol 00:05:49.709 Test: ut_lvs_init ...[2024-07-10 13:29:28.982037] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:49.709 [2024-07-10 13:29:28.982716] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:49.709 passed 00:05:49.709 Test: ut_lvol_init ...passed 00:05:49.709 Test: ut_lvol_snapshot ...passed 00:05:49.709 Test: ut_lvol_clone ...passed 00:05:49.709 Test: ut_lvs_destroy ...passed 00:05:49.709 Test: ut_lvs_unload ...passed 00:05:49.709 Test: ut_lvol_resize ...[2024-07-10 13:29:28.984781] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:49.709 passed 00:05:49.709 Test: ut_lvol_set_read_only ...passed 00:05:49.709 Test: ut_lvol_hotremove ...passed 00:05:49.709 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:49.709 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:49.709 Test: ut_lvol_read_write ...passed 00:05:49.709 Test: ut_vbdev_lvol_submit_request ...passed 00:05:49.709 Test: ut_lvol_examine_config ...passed 00:05:49.709 Test: ut_lvol_examine_disk ...[2024-07-10 13:29:28.985912] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:49.709 passed 00:05:49.710 Test: ut_lvol_rename ...[2024-07-10 13:29:28.987137] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:49.710 [2024-07-10 13:29:28.987270] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:49.710 passed 00:05:49.710 Test: ut_bdev_finish ...passed 00:05:49.710 Test: ut_lvs_rename ...passed 00:05:49.710 Test: ut_lvol_seek ...passed 00:05:49.710 Test: ut_esnap_dev_create ...[2024-07-10 13:29:28.988315] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:49.710 [2024-07-10 13:29:28.988425] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:49.710 [2024-07-10 13:29:28.988482] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:49.710 [2024-07-10 13:29:28.988557] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:49.710 passed 00:05:49.710 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-10 13:29:28.988793] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:49.710 [2024-07-10 13:29:28.988856] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:49.710 passed 00:05:49.710 00:05:49.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.710 suites 1 1 n/a 0 0 00:05:49.710 tests 21 21 21 0 0 00:05:49.710 asserts 712 712 712 0 n/a 00:05:49.710 00:05:49.710 Elapsed time = 0.006 seconds 00:05:49.710 13:29:29 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:49.710 00:05:49.710 00:05:49.710 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.710 http://cunit.sourceforge.net/ 00:05:49.710 00:05:49.710 00:05:49.710 Suite: zone_block 00:05:49.710 Test: test_zone_block_create ...passed 00:05:49.710 Test: test_zone_block_create_invalid ...[2024-07-10 13:29:29.042185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:49.710 [2024-07-10 13:29:29.042435] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-10 13:29:29.042555] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:49.710 [2024-07-10 13:29:29.042604] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-10 13:29:29.042707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:49.710 [2024-07-10 13:29:29.042732] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-10 13:29:29.042784] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:49.710 [2024-07-10 13:29:29.042812] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:49.710 Test: test_get_zone_info ...[2024-07-10 13:29:29.043157] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.043205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.043252] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_supported_io_types ...passed 00:05:49.710 Test: test_reset_zone ...[2024-07-10 13:29:29.043846] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.043881] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_open_zone ...[2024-07-10 13:29:29.044181] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.044656] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.044696] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_zone_write ...[2024-07-10 13:29:29.044993] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:49.710 [2024-07-10 13:29:29.045029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.045067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:49.710 [2024-07-10 13:29:29.045091] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.048816] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:49.710 [2024-07-10 13:29:29.048856] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
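
The zone_block records above probe the three ways a zone write can be refused: the zone is not in a writable state, the start LBA disagrees with the write pointer, or the request runs past the zone capacity. A self-contained restatement with illustrative field names (not SPDK's internal layout):

    #include <stdbool.h>
    #include <stdint.h>

    struct zone {
            uint64_t start_lba;      /* first lba of the zone */
            uint64_t capacity;       /* usable blocks in the zone */
            uint64_t write_pointer;  /* next lba that may be written */
            bool     writable;       /* open / empty, not full or read-only */
    };

    static bool
    zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
    {
            if (!z->writable) {
                    return false;  /* "Trying to write to zone in invalid state" */
            }
            if (lba != z->write_pointer) {
                    return false;  /* "invalid address (lba ..., wp ...)" */
            }
            if (lba + len > z->start_lba + z->capacity) {
                    return false;  /* "Write exceeds zone capacity" */
            }
            return true;
    }
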
00:05:49.710 [2024-07-10 13:29:29.048895] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:49.710 [2024-07-10 13:29:29.048909] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.052712] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:49.710 [2024-07-10 13:29:29.052761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_zone_read ...[2024-07-10 13:29:29.053063] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:49.710 [2024-07-10 13:29:29.053086] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.053128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:49.710 [2024-07-10 13:29:29.053145] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.053452] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:49.710 [2024-07-10 13:29:29.053473] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_close_zone ...[2024-07-10 13:29:29.053705] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.053757] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.053921] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.053961] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 Test: test_finish_zone ...[2024-07-10 13:29:29.054364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.054405] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:49.710 passed 00:05:49.710 Test: test_append_zone ...[2024-07-10 13:29:29.054660] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:49.710 [2024-07-10 13:29:29.054691] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.054723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:49.710 [2024-07-10 13:29:29.054736] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 [2024-07-10 13:29:29.061732] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:49.710 [2024-07-10 13:29:29.061773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:49.710 passed 00:05:49.710 00:05:49.710 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.710 suites 1 1 n/a 0 0 00:05:49.710 tests 11 11 11 0 0 00:05:49.710 asserts 3437 3437 3437 0 n/a 00:05:49.710 00:05:49.710 Elapsed time = 0.020 seconds 00:05:49.969 13:29:29 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:49.969 00:05:49.969 00:05:49.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.969 http://cunit.sourceforge.net/ 00:05:49.969 00:05:49.969 00:05:49.969 Suite: bdev 00:05:49.969 Test: basic ...[2024-07-10 13:29:29.168201] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55578fdc8401): Operation not permitted (rc=-1) 00:05:49.969 [2024-07-10 13:29:29.168516] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55578fdc83c0): Operation not permitted (rc=-1) 00:05:49.969 [2024-07-10 13:29:29.168575] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55578fdc8401): Operation not permitted (rc=-1) 00:05:49.969 passed 00:05:49.969 Test: unregister_and_close ...passed 00:05:49.969 Test: unregister_and_close_different_threads ...passed 00:05:49.969 Test: basic_qos ...passed 00:05:50.227 Test: put_channel_during_reset ...passed 00:05:50.227 Test: aborted_reset ...passed 00:05:50.227 Test: aborted_reset_no_outstanding_io ...passed 00:05:50.227 Test: io_during_reset ...passed 00:05:50.227 Test: reset_completions ...passed 00:05:50.227 Test: io_during_qos_queue ...passed 00:05:50.227 Test: io_during_qos_reset ...passed 00:05:50.227 Test: enomem ...passed 00:05:50.486 Test: enomem_multi_bdev ...passed 00:05:50.486 Test: enomem_multi_bdev_unregister ...passed 00:05:50.486 Test: enomem_multi_io_target ...passed 00:05:50.486 Test: qos_dynamic_enable ...passed 00:05:50.486 Test: bdev_histograms_mt ...passed 00:05:50.486 Test: bdev_set_io_timeout_mt ...[2024-07-10 13:29:29.813892] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:50.486 passed 00:05:50.486 Test: lock_lba_range_then_submit_io ...[2024-07-10 13:29:29.831821] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55578fdc8380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:50.746 
passed 00:05:50.746 Test: unregister_during_reset ...passed 00:05:50.746 Test: event_notify_and_close ...passed 00:05:50.746 Test: unregister_and_qos_poller ...passed 00:05:50.746 Suite: bdev_wrong_thread 00:05:50.746 Test: spdk_bdev_register_wt ...[2024-07-10 13:29:29.971002] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:50.746 passed 00:05:50.746 Test: spdk_bdev_examine_wt ...[2024-07-10 13:29:29.971341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:50.746 passed 00:05:50.746 00:05:50.746 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.746 suites 2 2 n/a 0 0 00:05:50.746 tests 24 24 24 0 0 00:05:50.746 asserts 621 621 621 0 n/a 00:05:50.746 00:05:50.746 Elapsed time = 0.824 seconds 00:05:50.746 00:05:50.746 real 0m3.700s 00:05:50.746 user 0m1.608s 00:05:50.746 sys 0m2.085s 00:05:50.746 13:29:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.746 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.746 ************************************ 00:05:50.746 END TEST unittest_bdev 00:05:50.746 ************************************ 00:05:50.746 13:29:30 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:50.746 13:29:30 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:50.746 13:29:30 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:50.746 13:29:30 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:50.746 13:29:30 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:50.746 13:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.746 13:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.746 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.746 ************************************ 00:05:50.746 START TEST unittest_bdev_raid5f 00:05:50.746 ************************************ 00:05:50.746 13:29:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:51.005 00:05:51.005 00:05:51.005 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.005 http://cunit.sourceforge.net/ 00:05:51.005 00:05:51.005 00:05:51.005 Suite: raid5f 00:05:51.005 Test: test_raid5f_start ...passed 00:05:51.268 Test: test_raid5f_submit_read_request ...passed 00:05:51.268 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:54.560 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:09.440 Test: test_raid5f_chunk_write_error ...passed 00:06:16.009 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:18.537 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:45.091 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:45.091 00:06:45.091 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.091 suites 1 1 n/a 0 0 00:06:45.091 tests 8 8 8 0 0 00:06:45.091 asserts 351864 351864 351864 0 n/a 00:06:45.091 00:06:45.091 Elapsed time = 50.366 seconds 00:06:45.091 00:06:45.091 real 0m50.462s 00:06:45.091 user 
0m48.292s 00:06:45.091 sys 0m2.156s 00:06:45.091 13:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.091 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.091 ************************************ 00:06:45.091 END TEST unittest_bdev_raid5f 00:06:45.091 ************************************ 00:06:45.091 13:30:20 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:45.091 13:30:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.091 13:30:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.091 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:06:45.091 ************************************ 00:06:45.091 START TEST unittest_blob_blobfs 00:06:45.091 ************************************ 00:06:45.091 13:30:20 -- common/autotest_common.sh@1104 -- # unittest_blob 00:06:45.091 13:30:20 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:45.091 13:30:20 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:45.091 00:06:45.091 00:06:45.091 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.091 http://cunit.sourceforge.net/ 00:06:45.091 00:06:45.091 00:06:45.091 Suite: blob_nocopy_noextent 00:06:45.091 Test: blob_init ...[2024-07-10 13:30:20.639482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:45.091 passed 00:06:45.091 Test: blob_thin_provision ...passed 00:06:45.091 Test: blob_read_only ...passed 00:06:45.091 Test: bs_load ...[2024-07-10 13:30:20.711359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:45.091 passed 00:06:45.091 Test: bs_load_custom_cluster_size ...passed 00:06:45.091 Test: bs_load_after_failed_grow ...passed 00:06:45.091 Test: bs_cluster_sz ...[2024-07-10 13:30:20.737927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:45.091 [2024-07-10 13:30:20.738235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
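
blob_init and bs_cluster_sz above feed the blobstore deliberately bad geometry: a 500-byte dev block length and a 4095-byte cluster size, both rejected because a cluster may not be smaller than the 4096-byte metadata page. A sketch of an init that satisfies those checks, assuming the two-argument spdk_bs_opts_init() of recent SPDK and an example 1 MiB cluster size:

    #include "spdk/blob.h"

    static void
    bs_init_example(struct spdk_bs_dev *dev,
                    spdk_bs_op_with_handle_complete cb_fn, void *cb_arg)
    {
            struct spdk_bs_opts opts;

            spdk_bs_opts_init(&opts, sizeof(opts));
            opts.cluster_sz = 1024 * 1024;  /* example value; must be >= 4096 */
            spdk_bs_init(dev, &opts, cb_fn, cb_arg);
    }
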
00:06:45.091 [2024-07-10 13:30:20.738345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:45.091 passed 00:06:45.091 Test: bs_resize_md ...passed 00:06:45.091 Test: bs_destroy ...passed 00:06:45.091 Test: bs_type ...passed 00:06:45.091 Test: bs_super_block ...passed 00:06:45.091 Test: bs_test_recover_cluster_count ...passed 00:06:45.091 Test: bs_grow_live ...passed 00:06:45.091 Test: bs_grow_live_no_space ...passed 00:06:45.091 Test: bs_test_grow ...passed 00:06:45.091 Test: blob_serialize_test ...passed 00:06:45.091 Test: super_block_crc ...passed 00:06:45.091 Test: blob_thin_prov_write_count_io ...passed 00:06:45.091 Test: bs_load_iter_test ...passed 00:06:45.091 Test: blob_relations ...[2024-07-10 13:30:20.887731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.887848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:20.888605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.888662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 passed 00:06:45.091 Test: blob_relations2 ...[2024-07-10 13:30:20.901901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.901978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:20.902028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.902043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:20.903120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.903172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:20.903547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.091 [2024-07-10 13:30:20.903596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 passed 00:06:45.091 Test: blob_relations3 ...passed 00:06:45.091 Test: blobstore_clean_power_failure ...passed 00:06:45.091 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:30:21.056458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:45.091 [2024-07-10 13:30:21.068449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:45.091 [2024-07-10 13:30:21.068536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:45.091 [2024-07-10 13:30:21.068586] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:21.080398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:45.091 [2024-07-10 13:30:21.080499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:45.091 [2024-07-10 13:30:21.080539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:45.091 [2024-07-10 13:30:21.080572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:21.092360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:45.091 [2024-07-10 13:30:21.092484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:21.104027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:45.091 [2024-07-10 13:30:21.104144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 [2024-07-10 13:30:21.115752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:45.091 [2024-07-10 13:30:21.115869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.091 passed 00:06:45.091 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:30:21.150692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:45.092 [2024-07-10 13:30:21.173545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:45.092 [2024-07-10 13:30:21.185597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:45.092 passed 00:06:45.092 Test: blob_io_unit ...passed 00:06:45.092 Test: blob_io_unit_compatibility ...passed 00:06:45.092 Test: blob_ext_md_pages ...passed 00:06:45.092 Test: blob_esnap_io_4096_4096 ...passed 00:06:45.092 Test: blob_esnap_io_512_512 ...passed 00:06:45.092 Test: blob_esnap_io_4096_512 ...passed 00:06:45.092 Test: blob_esnap_io_512_4096 ...passed 00:06:45.092 Suite: blob_bs_nocopy_noextent 00:06:45.092 Test: blob_open ...passed 00:06:45.092 Test: blob_create ...[2024-07-10 13:30:21.409336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:45.092 passed 00:06:45.092 Test: blob_create_loop ...passed 00:06:45.092 Test: blob_create_fail ...[2024-07-10 13:30:21.500425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.092 passed 00:06:45.092 Test: blob_create_internal ...passed 00:06:45.092 Test: blob_create_zero_extent ...passed 00:06:45.092 Test: blob_snapshot ...passed 00:06:45.092 Test: blob_clone ...passed 00:06:45.092 Test: blob_inflate ...[2024-07-10 13:30:21.671082] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:45.092 passed 00:06:45.092 Test: blob_delete ...passed 00:06:45.092 Test: blob_resize_test ...[2024-07-10 13:30:21.732964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:45.092 passed 00:06:45.092 Test: channel_ops ...passed 00:06:45.092 Test: blob_super ...passed 00:06:45.092 Test: blob_rw_verify_iov ...passed 00:06:45.092 Test: blob_unmap ...passed 00:06:45.092 Test: blob_iter ...passed 00:06:45.092 Test: blob_parse_md ...passed 00:06:45.092 Test: bs_load_pending_removal ...passed 00:06:45.092 Test: bs_unload ...[2024-07-10 13:30:21.978304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:45.092 passed 00:06:45.092 Test: bs_usable_clusters ...passed 00:06:45.092 Test: blob_crc ...[2024-07-10 13:30:22.040868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:45.092 [2024-07-10 13:30:22.041079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:45.092 passed 00:06:45.092 Test: blob_flags ...passed 00:06:45.092 Test: bs_version ...passed 00:06:45.092 Test: blob_set_xattrs_test ...[2024-07-10 13:30:22.134948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.092 [2024-07-10 13:30:22.135104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.092 passed 00:06:45.092 Test: blob_thin_prov_alloc ...passed 00:06:45.092 Test: blob_insert_cluster_msg_test ...passed 00:06:45.092 Test: blob_thin_prov_rw ...passed 00:06:45.092 Test: blob_thin_prov_rle ...passed 00:06:45.092 Test: blob_thin_prov_rw_iov ...passed 00:06:45.092 Test: blob_snapshot_rw ...passed 00:06:45.092 Test: blob_snapshot_rw_iov ...passed 00:06:45.092 Test: blob_inflate_rw ...passed 00:06:45.092 Test: blob_snapshot_freeze_io ...passed 00:06:45.092 Test: blob_operation_split_rw ...passed 00:06:45.092 Test: blob_operation_split_rw_iov ...passed 00:06:45.092 Test: blob_simultaneous_operations ...[2024-07-10 13:30:22.975842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:45.092 [2024-07-10 13:30:22.976006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:22.976916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:45.092 [2024-07-10 13:30:22.976996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:22.986236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:45.092 [2024-07-10 13:30:22.986373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:22.986497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:45.092 [2024-07-10 13:30:22.986547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 passed 00:06:45.092 Test: blob_persist_test ...passed 00:06:45.092 Test: blob_decouple_snapshot ...passed 00:06:45.092 Test: blob_seek_io_unit ...passed 00:06:45.092 Test: blob_nested_freezes ...passed 00:06:45.092 Suite: blob_blob_nocopy_noextent 00:06:45.092 Test: blob_write ...passed 00:06:45.092 Test: blob_read ...passed 00:06:45.092 Test: blob_rw_verify ...passed 00:06:45.092 Test: blob_rw_verify_iov_nomem ...passed 00:06:45.092 Test: blob_rw_iov_read_only ...passed 00:06:45.092 Test: blob_xattr ...passed 00:06:45.092 Test: blob_dirty_shutdown ...passed 00:06:45.092 Test: blob_is_degraded ...passed 00:06:45.092 Suite: blob_esnap_bs_nocopy_noextent 00:06:45.092 Test: blob_esnap_create ...passed 00:06:45.092 Test: blob_esnap_thread_add_remove ...passed 00:06:45.092 Test: blob_esnap_clone_snapshot ...passed 00:06:45.092 Test: blob_esnap_clone_inflate ...passed 00:06:45.092 Test: blob_esnap_clone_decouple ...passed 00:06:45.092 Test: blob_esnap_clone_reload ...passed 00:06:45.092 Test: blob_esnap_hotplug ...passed 00:06:45.092 Suite: blob_nocopy_extent 00:06:45.092 Test: blob_init ...[2024-07-10 13:30:23.643618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:45.092 passed 00:06:45.092 Test: blob_thin_provision ...passed 00:06:45.092 Test: blob_read_only ...passed 00:06:45.092 Test: bs_load ...[2024-07-10 13:30:23.687852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:45.092 passed 00:06:45.092 Test: bs_load_custom_cluster_size ...passed 00:06:45.092 Test: bs_load_after_failed_grow ...passed 00:06:45.092 Test: bs_cluster_sz ...[2024-07-10 13:30:23.711942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:45.092 [2024-07-10 13:30:23.712208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
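The bs_opts_verify and spdk_bs_init errors just above, together with the 4095-byte cluster error that follows, are expected output from the bs_cluster_sz test: it feeds the init path deliberately invalid options and asserts on the returned errno. A minimal sketch of that call pattern, assuming the spdk_bs_opts/spdk_bs_init API of roughly this SPDK revision (spdk_bs_opts_init takes an extra size argument in later releases; init_done and try_bad_cluster_size are illustrative names, not the test's own symbols):

    #include "spdk/blob.h"

    static void
    init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
            int *out = cb_arg;

            *out = bserrno;  /* the test expects a negative errno, not a blobstore */
    }

    static void
    try_bad_cluster_size(struct spdk_bs_dev *dev, int *out)
    {
            struct spdk_bs_opts opts;

            spdk_bs_opts_init(&opts);
            opts.cluster_sz = 4095;  /* smaller than the 4096-byte metadata page, so bs_alloc rejects it */
            spdk_bs_init(dev, &opts, init_done, out);
    }

Every *ERROR* record in this stretch is printed on purpose; the "passed" verdict after each block is what actually matters.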
00:06:45.092 [2024-07-10 13:30:23.712284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:45.092 passed 00:06:45.092 Test: bs_resize_md ...passed 00:06:45.092 Test: bs_destroy ...passed 00:06:45.092 Test: bs_type ...passed 00:06:45.092 Test: bs_super_block ...passed 00:06:45.092 Test: bs_test_recover_cluster_count ...passed 00:06:45.092 Test: bs_grow_live ...passed 00:06:45.092 Test: bs_grow_live_no_space ...passed 00:06:45.092 Test: bs_test_grow ...passed 00:06:45.092 Test: blob_serialize_test ...passed 00:06:45.092 Test: super_block_crc ...passed 00:06:45.092 Test: blob_thin_prov_write_count_io ...passed 00:06:45.092 Test: bs_load_iter_test ...passed 00:06:45.092 Test: blob_relations ...[2024-07-10 13:30:23.852791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.852938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:23.853673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.853757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 passed 00:06:45.092 Test: blob_relations2 ...[2024-07-10 13:30:23.866233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.866347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:23.866385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.866422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:23.867531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.867612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:23.867943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:45.092 [2024-07-10 13:30:23.868019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 passed 00:06:45.092 Test: blob_relations3 ...passed 00:06:45.092 Test: blobstore_clean_power_failure ...passed 00:06:45.092 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:30:24.013201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:45.092 [2024-07-10 13:30:24.024615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:45.092 [2024-07-10 13:30:24.036145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:45.092 [2024-07-10 13:30:24.036275] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:45.092 [2024-07-10 13:30:24.036316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:24.047674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:45.092 [2024-07-10 13:30:24.047805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:45.092 [2024-07-10 13:30:24.047846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:45.092 [2024-07-10 13:30:24.047892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:24.059401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:45.092 [2024-07-10 13:30:24.059546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:45.092 [2024-07-10 13:30:24.059583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:45.092 [2024-07-10 13:30:24.059637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:24.071335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:45.092 [2024-07-10 13:30:24.071516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.092 [2024-07-10 13:30:24.083162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:45.093 [2024-07-10 13:30:24.083345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.093 [2024-07-10 13:30:24.094988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:45.093 [2024-07-10 13:30:24.095144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:45.093 passed 00:06:45.093 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:30:24.129577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:45.093 [2024-07-10 13:30:24.140811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:45.093 [2024-07-10 13:30:24.163046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:45.093 [2024-07-10 13:30:24.174673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:45.093 passed 00:06:45.093 Test: blob_io_unit ...passed 00:06:45.093 Test: blob_io_unit_compatibility ...passed 00:06:45.093 Test: blob_ext_md_pages ...passed 00:06:45.093 Test: blob_esnap_io_4096_4096 ...passed 00:06:45.093 Test: blob_esnap_io_512_512 ...passed 00:06:45.093 Test: blob_esnap_io_4096_512 ...passed 00:06:45.093 Test: 
blob_esnap_io_512_4096 ...passed 00:06:45.093 Suite: blob_bs_nocopy_extent 00:06:45.093 Test: blob_open ...passed 00:06:45.093 Test: blob_create ...[2024-07-10 13:30:24.400500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:45.093 passed 00:06:45.352 Test: blob_create_loop ...passed 00:06:45.352 Test: blob_create_fail ...[2024-07-10 13:30:24.492505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.352 passed 00:06:45.352 Test: blob_create_internal ...passed 00:06:45.352 Test: blob_create_zero_extent ...passed 00:06:45.352 Test: blob_snapshot ...passed 00:06:45.352 Test: blob_clone ...passed 00:06:45.352 Test: blob_inflate ...[2024-07-10 13:30:24.661915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:45.352 passed 00:06:45.352 Test: blob_delete ...passed 00:06:45.612 Test: blob_resize_test ...[2024-07-10 13:30:24.725797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:45.612 passed 00:06:45.612 Test: channel_ops ...passed 00:06:45.612 Test: blob_super ...passed 00:06:45.612 Test: blob_rw_verify_iov ...passed 00:06:45.612 Test: blob_unmap ...passed 00:06:45.612 Test: blob_iter ...passed 00:06:45.612 Test: blob_parse_md ...passed 00:06:45.612 Test: bs_load_pending_removal ...passed 00:06:45.871 Test: bs_unload ...[2024-07-10 13:30:24.976395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:45.871 passed 00:06:45.871 Test: bs_usable_clusters ...passed 00:06:45.871 Test: blob_crc ...[2024-07-10 13:30:25.039121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:45.871 [2024-07-10 13:30:25.039316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:45.871 passed 00:06:45.871 Test: blob_flags ...passed 00:06:45.871 Test: bs_version ...passed 00:06:45.871 Test: blob_set_xattrs_test ...[2024-07-10 13:30:25.134661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.871 [2024-07-10 13:30:25.134824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:45.871 passed 00:06:46.131 Test: blob_thin_prov_alloc ...passed 00:06:46.131 Test: blob_insert_cluster_msg_test ...passed 00:06:46.131 Test: blob_thin_prov_rw ...passed 00:06:46.131 Test: blob_thin_prov_rle ...passed 00:06:46.131 Test: blob_thin_prov_rw_iov ...passed 00:06:46.131 Test: blob_snapshot_rw ...passed 00:06:46.131 Test: blob_snapshot_rw_iov ...passed 00:06:46.391 Test: blob_inflate_rw ...passed 00:06:46.391 Test: blob_snapshot_freeze_io ...passed 00:06:46.651 Test: blob_operation_split_rw ...passed 00:06:46.651 Test: blob_operation_split_rw_iov ...passed 00:06:46.651 Test: blob_simultaneous_operations ...[2024-07-10 13:30:25.955767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:46.651 [2024-07-10 
13:30:25.955935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:46.651 [2024-07-10 13:30:25.956808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:46.651 [2024-07-10 13:30:25.956888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:46.651 [2024-07-10 13:30:25.965822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:46.651 [2024-07-10 13:30:25.965927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:46.651 [2024-07-10 13:30:25.966063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:46.651 [2024-07-10 13:30:25.966133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:46.651 passed 00:06:46.911 Test: blob_persist_test ...passed 00:06:46.911 Test: blob_decouple_snapshot ...passed 00:06:46.911 Test: blob_seek_io_unit ...passed 00:06:46.911 Test: blob_nested_freezes ...passed 00:06:46.911 Suite: blob_blob_nocopy_extent 00:06:46.911 Test: blob_write ...passed 00:06:46.911 Test: blob_read ...passed 00:06:46.911 Test: blob_rw_verify ...passed 00:06:46.911 Test: blob_rw_verify_iov_nomem ...passed 00:06:47.170 Test: blob_rw_iov_read_only ...passed 00:06:47.170 Test: blob_xattr ...passed 00:06:47.170 Test: blob_dirty_shutdown ...passed 00:06:47.170 Test: blob_is_degraded ...passed 00:06:47.170 Suite: blob_esnap_bs_nocopy_extent 00:06:47.170 Test: blob_esnap_create ...passed 00:06:47.170 Test: blob_esnap_thread_add_remove ...passed 00:06:47.170 Test: blob_esnap_clone_snapshot ...passed 00:06:47.170 Test: blob_esnap_clone_inflate ...passed 00:06:47.430 Test: blob_esnap_clone_decouple ...passed 00:06:47.430 Test: blob_esnap_clone_reload ...passed 00:06:47.430 Test: blob_esnap_hotplug ...passed 00:06:47.430 Suite: blob_copy_noextent 00:06:47.430 Test: blob_init ...[2024-07-10 13:30:26.621773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:47.430 passed 00:06:47.430 Test: blob_thin_provision ...passed 00:06:47.430 Test: blob_read_only ...passed 00:06:47.430 Test: bs_load ...[2024-07-10 13:30:26.665219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:47.430 passed 00:06:47.430 Test: bs_load_custom_cluster_size ...passed 00:06:47.430 Test: bs_load_after_failed_grow ...passed 00:06:47.430 Test: bs_cluster_sz ...[2024-07-10 13:30:26.688348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:47.430 [2024-07-10 13:30:26.688547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
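The blob_simultaneous_operations records near the top of this suite repeat the pair "Cannot remove snapshot because it is open" / "Failed to remove blob". That is the negative path the test drives: deleting a snapshot that still has an open handle is expected to fail (with -EBUSY in this revision). A hedged sketch of the pattern, using the public spdk_bs_delete_blob API; delete_done and the out-parameter are illustrative, not the unit test's own symbols:

    static void
    delete_done(void *cb_arg, int bserrno)
    {
            int *out = cb_arg;

            *out = bserrno;  /* -EBUSY expected while the snapshot handle is open */
    }

    static void
    try_delete_open_snapshot(struct spdk_blob_store *bs, spdk_blob_id snapshotid, int *out)
    {
            /* snapshotid came from spdk_bs_create_snapshot() and the snapshot
             * blob has not been closed yet, so deletion must fail here. */
            spdk_bs_delete_blob(bs, snapshotid, delete_done, out);
    }

Only after the test closes the snapshot does the same delete call succeed, which is why the errors stop partway through each blob_simultaneous_operations block.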
00:06:47.430 [2024-07-10 13:30:26.688600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:47.430 passed 00:06:47.430 Test: bs_resize_md ...passed 00:06:47.430 Test: bs_destroy ...passed 00:06:47.430 Test: bs_type ...passed 00:06:47.430 Test: bs_super_block ...passed 00:06:47.430 Test: bs_test_recover_cluster_count ...passed 00:06:47.430 Test: bs_grow_live ...passed 00:06:47.430 Test: bs_grow_live_no_space ...passed 00:06:47.430 Test: bs_test_grow ...passed 00:06:47.430 Test: blob_serialize_test ...passed 00:06:47.430 Test: super_block_crc ...passed 00:06:47.689 Test: blob_thin_prov_write_count_io ...passed 00:06:47.689 Test: bs_load_iter_test ...passed 00:06:47.689 Test: blob_relations ...[2024-07-10 13:30:26.832619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.689 [2024-07-10 13:30:26.832780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.689 [2024-07-10 13:30:26.833295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.690 [2024-07-10 13:30:26.833352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 passed 00:06:47.690 Test: blob_relations2 ...[2024-07-10 13:30:26.846095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.690 [2024-07-10 13:30:26.846218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:26.846252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.690 [2024-07-10 13:30:26.846278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:26.847026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.690 [2024-07-10 13:30:26.847097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:26.847374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:47.690 [2024-07-10 13:30:26.847433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 passed 00:06:47.690 Test: blob_relations3 ...passed 00:06:47.690 Test: blobstore_clean_power_failure ...passed 00:06:47.690 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:30:26.992818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:47.690 [2024-07-10 13:30:27.003989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:47.690 [2024-07-10 13:30:27.004146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:47.690 [2024-07-10 13:30:27.004181] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:27.015228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:47.690 [2024-07-10 13:30:27.015353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:47.690 [2024-07-10 13:30:27.015391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:47.690 [2024-07-10 13:30:27.015427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:27.026523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:47.690 [2024-07-10 13:30:27.026690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:27.037756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:47.690 [2024-07-10 13:30:27.037913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.690 [2024-07-10 13:30:27.049021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:47.690 [2024-07-10 13:30:27.049165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:47.949 passed 00:06:47.949 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:30:27.082179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:47.949 [2024-07-10 13:30:27.103713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:47.949 [2024-07-10 13:30:27.114922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:47.949 passed 00:06:47.949 Test: blob_io_unit ...passed 00:06:47.949 Test: blob_io_unit_compatibility ...passed 00:06:47.949 Test: blob_ext_md_pages ...passed 00:06:47.949 Test: blob_esnap_io_4096_4096 ...passed 00:06:47.949 Test: blob_esnap_io_512_512 ...passed 00:06:47.949 Test: blob_esnap_io_4096_512 ...passed 00:06:47.949 Test: blob_esnap_io_512_4096 ...passed 00:06:47.949 Suite: blob_bs_copy_noextent 00:06:48.209 Test: blob_open ...passed 00:06:48.209 Test: blob_create ...[2024-07-10 13:30:27.339989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:48.209 passed 00:06:48.209 Test: blob_create_loop ...passed 00:06:48.209 Test: blob_create_fail ...[2024-07-10 13:30:27.424730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:48.209 passed 00:06:48.209 Test: blob_create_internal ...passed 00:06:48.209 Test: blob_create_zero_extent ...passed 00:06:48.209 Test: blob_snapshot ...passed 00:06:48.209 Test: blob_clone ...passed 00:06:48.469 Test: blob_inflate ...[2024-07-10 13:30:27.588655] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:48.469 passed 00:06:48.469 Test: blob_delete ...passed 00:06:48.469 Test: blob_resize_test ...[2024-07-10 13:30:27.651944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:48.469 passed 00:06:48.469 Test: channel_ops ...passed 00:06:48.469 Test: blob_super ...passed 00:06:48.469 Test: blob_rw_verify_iov ...passed 00:06:48.469 Test: blob_unmap ...passed 00:06:48.469 Test: blob_iter ...passed 00:06:48.731 Test: blob_parse_md ...passed 00:06:48.731 Test: bs_load_pending_removal ...passed 00:06:48.731 Test: bs_unload ...[2024-07-10 13:30:27.909168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:48.731 passed 00:06:48.731 Test: bs_usable_clusters ...passed 00:06:48.731 Test: blob_crc ...[2024-07-10 13:30:27.972491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:48.731 [2024-07-10 13:30:27.972650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:48.731 passed 00:06:48.731 Test: blob_flags ...passed 00:06:48.731 Test: bs_version ...passed 00:06:48.731 Test: blob_set_xattrs_test ...[2024-07-10 13:30:28.067629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:48.731 [2024-07-10 13:30:28.067791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:48.731 passed 00:06:48.999 Test: blob_thin_prov_alloc ...passed 00:06:48.999 Test: blob_insert_cluster_msg_test ...passed 00:06:48.999 Test: blob_thin_prov_rw ...passed 00:06:48.999 Test: blob_thin_prov_rle ...passed 00:06:48.999 Test: blob_thin_prov_rw_iov ...passed 00:06:48.999 Test: blob_snapshot_rw ...passed 00:06:49.259 Test: blob_snapshot_rw_iov ...passed 00:06:49.259 Test: blob_inflate_rw ...passed 00:06:49.259 Test: blob_snapshot_freeze_io ...passed 00:06:49.519 Test: blob_operation_split_rw ...passed 00:06:49.779 Test: blob_operation_split_rw_iov ...passed 00:06:49.779 Test: blob_simultaneous_operations ...[2024-07-10 13:30:28.949574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:49.779 [2024-07-10 13:30:28.949727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:49.779 [2024-07-10 13:30:28.950177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:49.779 [2024-07-10 13:30:28.950245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:49.779 [2024-07-10 13:30:28.952877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:49.779 [2024-07-10 13:30:28.952946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:49.779 [2024-07-10 13:30:28.953033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:49.779 [2024-07-10 13:30:28.953066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:49.779 passed 00:06:49.779 Test: blob_persist_test ...passed 00:06:49.779 Test: blob_decouple_snapshot ...passed 00:06:49.779 Test: blob_seek_io_unit ...passed 00:06:49.779 Test: blob_nested_freezes ...passed 00:06:49.779 Suite: blob_blob_copy_noextent 00:06:49.779 Test: blob_write ...passed 00:06:50.039 Test: blob_read ...passed 00:06:50.039 Test: blob_rw_verify ...passed 00:06:50.039 Test: blob_rw_verify_iov_nomem ...passed 00:06:50.039 Test: blob_rw_iov_read_only ...passed 00:06:50.039 Test: blob_xattr ...passed 00:06:50.039 Test: blob_dirty_shutdown ...passed 00:06:50.039 Test: blob_is_degraded ...passed 00:06:50.039 Suite: blob_esnap_bs_copy_noextent 00:06:50.039 Test: blob_esnap_create ...passed 00:06:50.297 Test: blob_esnap_thread_add_remove ...passed 00:06:50.297 Test: blob_esnap_clone_snapshot ...passed 00:06:50.297 Test: blob_esnap_clone_inflate ...passed 00:06:50.297 Test: blob_esnap_clone_decouple ...passed 00:06:50.297 Test: blob_esnap_clone_reload ...passed 00:06:50.297 Test: blob_esnap_hotplug ...passed 00:06:50.297 Suite: blob_copy_extent 00:06:50.297 Test: blob_init ...[2024-07-10 13:30:29.582241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:50.297 passed 00:06:50.297 Test: blob_thin_provision ...passed 00:06:50.297 Test: blob_read_only ...passed 00:06:50.297 Test: bs_load ...[2024-07-10 13:30:29.623996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:50.297 passed 00:06:50.297 Test: bs_load_custom_cluster_size ...passed 00:06:50.297 Test: bs_load_after_failed_grow ...passed 00:06:50.297 Test: bs_cluster_sz ...[2024-07-10 13:30:29.647411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:50.297 [2024-07-10 13:30:29.647603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
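This is now the fourth pass of the same test list: the blobstore suite is instantiated once per combination of copy/nocopy and extent/noextent, so each test body runs under blob_nocopy_noextent, blob_nocopy_extent, blob_copy_noextent, and blob_copy_extent. A sketch of how CUnit makes that cheap, assuming plain CUnit 2.1-3 as reported in the banners (the suite names are from the log; the registration helper and test symbol are illustrative):

    #include <CUnit/Basic.h>

    static void blob_init_test(void) { /* shared body, parameterized by global suite state */ }

    static CU_pSuite
    register_blob_suite(const char *name)
    {
            CU_pSuite suite = CU_add_suite(name, NULL, NULL);

            CU_ADD_TEST(suite, blob_init_test);
            /* ... the remaining shared tests ... */
            return suite;
    }

    int
    main(void)
    {
            CU_initialize_registry();
            register_blob_suite("blob_nocopy_noextent");
            register_blob_suite("blob_nocopy_extent");
            register_blob_suite("blob_copy_noextent");
            register_blob_suite("blob_copy_extent");
            CU_basic_run_tests();
            CU_cleanup_registry();
            return CU_get_number_of_failures();
    }

That layout is why the final run summary below reports 16 suites but only a few distinct test lists, and why asserts dwarf tests (92605 versus 348).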
00:06:50.297 [2024-07-10 13:30:29.647660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:50.297 passed 00:06:50.557 Test: bs_resize_md ...passed 00:06:50.557 Test: bs_destroy ...passed 00:06:50.557 Test: bs_type ...passed 00:06:50.557 Test: bs_super_block ...passed 00:06:50.557 Test: bs_test_recover_cluster_count ...passed 00:06:50.557 Test: bs_grow_live ...passed 00:06:50.557 Test: bs_grow_live_no_space ...passed 00:06:50.557 Test: bs_test_grow ...passed 00:06:50.557 Test: blob_serialize_test ...passed 00:06:50.557 Test: super_block_crc ...passed 00:06:50.557 Test: blob_thin_prov_write_count_io ...passed 00:06:50.557 Test: bs_load_iter_test ...passed 00:06:50.557 Test: blob_relations ...[2024-07-10 13:30:29.784340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.784483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 [2024-07-10 13:30:29.785173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.785246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 passed 00:06:50.557 Test: blob_relations2 ...[2024-07-10 13:30:29.797897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.798034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 [2024-07-10 13:30:29.798098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.798137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 [2024-07-10 13:30:29.799208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.799288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 [2024-07-10 13:30:29.799639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:50.557 [2024-07-10 13:30:29.799711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.557 passed 00:06:50.557 Test: blob_relations3 ...passed 00:06:50.818 Test: blobstore_clean_power_failure ...passed 00:06:50.818 Test: blob_delete_snapshot_power_failure ...[2024-07-10 13:30:29.946221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:50.818 [2024-07-10 13:30:29.957733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:50.818 [2024-07-10 13:30:29.969128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:50.818 [2024-07-10 13:30:29.969274] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:50.818 [2024-07-10 13:30:29.969314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 [2024-07-10 13:30:29.982941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:50.818 [2024-07-10 13:30:29.983079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:50.818 [2024-07-10 13:30:29.983132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:50.818 [2024-07-10 13:30:29.983167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 [2024-07-10 13:30:29.994743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:50.818 [2024-07-10 13:30:29.994884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:50.818 [2024-07-10 13:30:29.994914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:50.818 [2024-07-10 13:30:29.994950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 [2024-07-10 13:30:30.006327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:50.818 [2024-07-10 13:30:30.006483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 [2024-07-10 13:30:30.018125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:50.818 [2024-07-10 13:30:30.018290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 [2024-07-10 13:30:30.029782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:50.818 [2024-07-10 13:30:30.029932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:50.818 passed 00:06:50.818 Test: blob_create_snapshot_power_failure ...[2024-07-10 13:30:30.063498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:50.818 [2024-07-10 13:30:30.074542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:50.818 [2024-07-10 13:30:30.096705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:50.818 [2024-07-10 13:30:30.107990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:50.818 passed 00:06:50.818 Test: blob_io_unit ...passed 00:06:50.818 Test: blob_io_unit_compatibility ...passed 00:06:50.818 Test: blob_ext_md_pages ...passed 00:06:51.077 Test: blob_esnap_io_4096_4096 ...passed 00:06:51.077 Test: blob_esnap_io_512_512 ...passed 00:06:51.077 Test: blob_esnap_io_4096_512 ...passed 00:06:51.077 Test: 
blob_esnap_io_512_4096 ...passed 00:06:51.077 Suite: blob_bs_copy_extent 00:06:51.078 Test: blob_open ...passed 00:06:51.078 Test: blob_create ...[2024-07-10 13:30:30.328075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:51.078 passed 00:06:51.078 Test: blob_create_loop ...passed 00:06:51.078 Test: blob_create_fail ...[2024-07-10 13:30:30.415397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:51.078 passed 00:06:51.337 Test: blob_create_internal ...passed 00:06:51.337 Test: blob_create_zero_extent ...passed 00:06:51.337 Test: blob_snapshot ...passed 00:06:51.337 Test: blob_clone ...passed 00:06:51.337 Test: blob_inflate ...[2024-07-10 13:30:30.574680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:51.337 passed 00:06:51.337 Test: blob_delete ...passed 00:06:51.337 Test: blob_resize_test ...[2024-07-10 13:30:30.635081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:51.337 passed 00:06:51.337 Test: channel_ops ...passed 00:06:51.598 Test: blob_super ...passed 00:06:51.598 Test: blob_rw_verify_iov ...passed 00:06:51.598 Test: blob_unmap ...passed 00:06:51.598 Test: blob_iter ...passed 00:06:51.598 Test: blob_parse_md ...passed 00:06:51.598 Test: bs_load_pending_removal ...passed 00:06:51.598 Test: bs_unload ...[2024-07-10 13:30:30.879546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:51.598 passed 00:06:51.598 Test: bs_usable_clusters ...passed 00:06:51.598 Test: blob_crc ...[2024-07-10 13:30:30.940653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:51.598 [2024-07-10 13:30:30.940822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:51.598 passed 00:06:51.858 Test: blob_flags ...passed 00:06:51.858 Test: bs_version ...passed 00:06:51.858 Test: blob_set_xattrs_test ...[2024-07-10 13:30:31.037816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:51.858 [2024-07-10 13:30:31.037975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:51.858 passed 00:06:51.858 Test: blob_thin_prov_alloc ...passed 00:06:51.858 Test: blob_insert_cluster_msg_test ...passed 00:06:51.858 Test: blob_thin_prov_rw ...passed 00:06:52.117 Test: blob_thin_prov_rle ...passed 00:06:52.117 Test: blob_thin_prov_rw_iov ...passed 00:06:52.117 Test: blob_snapshot_rw ...passed 00:06:52.117 Test: blob_snapshot_rw_iov ...passed 00:06:52.377 Test: blob_inflate_rw ...passed 00:06:52.377 Test: blob_snapshot_freeze_io ...passed 00:06:52.377 Test: blob_operation_split_rw ...passed 00:06:52.636 Test: blob_operation_split_rw_iov ...passed 00:06:52.636 Test: blob_simultaneous_operations ...[2024-07-10 13:30:31.845504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:52.636 [2024-07-10 
13:30:31.845711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:52.636 [2024-07-10 13:30:31.846255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:52.636 [2024-07-10 13:30:31.846364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:52.636 [2024-07-10 13:30:31.849391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:52.636 [2024-07-10 13:30:31.849487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:52.636 [2024-07-10 13:30:31.849633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:52.636 [2024-07-10 13:30:31.849711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:52.636 passed 00:06:52.636 Test: blob_persist_test ...passed 00:06:52.636 Test: blob_decouple_snapshot ...passed 00:06:52.636 Test: blob_seek_io_unit ...passed 00:06:52.896 Test: blob_nested_freezes ...passed 00:06:52.896 Suite: blob_blob_copy_extent 00:06:52.896 Test: blob_write ...passed 00:06:52.896 Test: blob_read ...passed 00:06:52.896 Test: blob_rw_verify ...passed 00:06:52.896 Test: blob_rw_verify_iov_nomem ...passed 00:06:52.896 Test: blob_rw_iov_read_only ...passed 00:06:52.896 Test: blob_xattr ...passed 00:06:52.896 Test: blob_dirty_shutdown ...passed 00:06:53.156 Test: blob_is_degraded ...passed 00:06:53.156 Suite: blob_esnap_bs_copy_extent 00:06:53.156 Test: blob_esnap_create ...passed 00:06:53.156 Test: blob_esnap_thread_add_remove ...passed 00:06:53.156 Test: blob_esnap_clone_snapshot ...passed 00:06:53.156 Test: blob_esnap_clone_inflate ...passed 00:06:53.156 Test: blob_esnap_clone_decouple ...passed 00:06:53.156 Test: blob_esnap_clone_reload ...passed 00:06:53.156 Test: blob_esnap_hotplug ...passed 00:06:53.156 00:06:53.156 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.156 suites 16 16 n/a 0 0 00:06:53.156 tests 348 348 348 0 0 00:06:53.156 asserts 92605 92605 92605 0 n/a 00:06:53.156 00:06:53.156 Elapsed time = 11.806 seconds 00:06:53.419 13:30:32 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:53.419 00:06:53.419 00:06:53.419 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.419 http://cunit.sourceforge.net/ 00:06:53.419 00:06:53.419 00:06:53.419 Suite: blob_bdev 00:06:53.419 Test: create_bs_dev ...passed 00:06:53.419 Test: create_bs_dev_ro ...[2024-07-10 13:30:32.596209] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:53.419 passed 00:06:53.419 Test: create_bs_dev_rw ...passed 00:06:53.419 Test: claim_bs_dev ...[2024-07-10 13:30:32.597156] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:53.419 passed 00:06:53.419 Test: claim_bs_dev_ro ...passed 00:06:53.419 Test: deferred_destroy_refs ...passed 00:06:53.419 Test: deferred_destroy_channels ...passed 00:06:53.419 Test: deferred_destroy_threads ...passed 00:06:53.419 00:06:53.419 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.419 suites 1 1 n/a 0 0 00:06:53.419 tests 8 8 8 0 0 00:06:53.419 
asserts 119 119 119 0 n/a 00:06:53.419 00:06:53.419 Elapsed time = 0.002 seconds 00:06:53.419 13:30:32 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:53.419 00:06:53.419 00:06:53.419 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.419 http://cunit.sourceforge.net/ 00:06:53.419 00:06:53.419 00:06:53.419 Suite: tree 00:06:53.419 Test: blobfs_tree_op_test ...passed 00:06:53.419 00:06:53.419 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.419 suites 1 1 n/a 0 0 00:06:53.419 tests 1 1 1 0 0 00:06:53.419 asserts 27 27 27 0 n/a 00:06:53.419 00:06:53.419 Elapsed time = 0.000 seconds 00:06:53.419 13:30:32 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:53.419 00:06:53.419 00:06:53.419 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.419 http://cunit.sourceforge.net/ 00:06:53.419 00:06:53.419 00:06:53.419 Suite: blobfs_async_ut 00:06:53.419 Test: fs_init ...passed 00:06:53.419 Test: fs_open ...passed 00:06:53.419 Test: fs_create ...passed 00:06:53.419 Test: fs_truncate ...passed 00:06:53.679 Test: fs_rename ...[2024-07-10 13:30:32.789252] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:53.679 passed 00:06:53.679 Test: fs_rw_async ...passed 00:06:53.679 Test: fs_writev_readv_async ...passed 00:06:53.679 Test: tree_find_buffer_ut ...passed 00:06:53.679 Test: channel_ops ...passed 00:06:53.679 Test: channel_ops_sync ...passed 00:06:53.679 00:06:53.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.679 suites 1 1 n/a 0 0 00:06:53.679 tests 10 10 10 0 0 00:06:53.679 asserts 292 292 292 0 n/a 00:06:53.679 00:06:53.679 Elapsed time = 0.168 seconds 00:06:53.679 13:30:32 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:53.679 00:06:53.679 00:06:53.679 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.679 http://cunit.sourceforge.net/ 00:06:53.679 00:06:53.679 00:06:53.679 Suite: blobfs_sync_ut 00:06:53.679 Test: cache_read_after_write ...[2024-07-10 13:30:32.954323] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:53.679 passed 00:06:53.679 Test: file_length ...passed 00:06:53.679 Test: append_write_to_extend_blob ...passed 00:06:53.679 Test: partial_buffer ...passed 00:06:53.679 Test: cache_write_null_buffer ...passed 00:06:53.679 Test: fs_create_sync ...passed 00:06:53.679 Test: fs_rename_sync ...passed 00:06:53.938 Test: cache_append_no_cache ...passed 00:06:53.938 Test: fs_delete_file_without_close ...passed 00:06:53.938 00:06:53.938 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.938 suites 1 1 n/a 0 0 00:06:53.938 tests 9 9 9 0 0 00:06:53.938 asserts 345 345 345 0 n/a 00:06:53.938 00:06:53.938 Elapsed time = 0.298 seconds 00:06:53.938 13:30:33 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:53.938 00:06:53.938 00:06:53.938 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.938 http://cunit.sourceforge.net/ 00:06:53.938 00:06:53.938 00:06:53.938 Suite: blobfs_bdev_ut 00:06:53.938 Test: spdk_blobfs_bdev_detect_test ...[2024-07-10 13:30:33.107173] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:06:53.938 passed 00:06:53.938 Test: spdk_blobfs_bdev_create_test ...[2024-07-10 13:30:33.107528] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:53.938 passed 00:06:53.938 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:53.938 00:06:53.938 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.938 suites 1 1 n/a 0 0 00:06:53.938 tests 3 3 3 0 0 00:06:53.938 asserts 9 9 9 0 n/a 00:06:53.938 00:06:53.938 Elapsed time = 0.000 seconds 00:06:53.938 ************************************ 00:06:53.938 END TEST unittest_blob_blobfs 00:06:53.938 ************************************ 00:06:53.938 00:06:53.938 real 0m12.526s 00:06:53.938 user 0m12.105s 00:06:53.938 sys 0m0.542s 00:06:53.938 13:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.938 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:53.938 13:30:33 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:53.938 13:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.938 13:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.938 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:53.938 ************************************ 00:06:53.938 START TEST unittest_event 00:06:53.938 ************************************ 00:06:53.938 13:30:33 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:53.938 13:30:33 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:53.938 00:06:53.938 00:06:53.938 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.938 http://cunit.sourceforge.net/ 00:06:53.938 00:06:53.938 00:06:53.939 Suite: app_suite 00:06:53.939 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:53.939 options:app_ut: invalid option -- 'z' 00:06:53.939 00:06:53.939 -c, --config JSON config file (default none) 00:06:53.939 --json JSON config file (default none) 00:06:53.939 --json-ignore-init-errors 00:06:53.939 don't exit on invalid config entry 00:06:53.939 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:53.939 -g, --single-file-segments 00:06:53.939 force creating just one hugetlbfs file 00:06:53.939 -h, --help show this usage 00:06:53.939 -i, --shm-id shared memory ID (optional) 00:06:53.939 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:53.939 --lcores lcore to CPU mapping list. The list is in the format: 00:06:53.939 [<,lcores[@CPUs]>...] 00:06:53.939 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:53.939 Within the group, '-' is used for range separator, 00:06:53.939 ',' is used for single number separator. 00:06:53.939 '( )' can be omitted for single element group, 00:06:53.939 '@' can be omitted if cpus and lcores have the same value 00:06:53.939 -n, --mem-channels channel number of memory channels used for DPDK 00:06:53.939 -p, --main-core main (primary) core for DPDK 00:06:53.939 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:53.939 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:53.939 --disable-cpumask-locks Disable CPU core lock files. 
00:06:53.939 --silence-noticelog disable notice level logging to stderr 00:06:53.939 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:53.939 -u, --no-pci disable PCI access 00:06:53.939 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:53.939 --max-delay maximum reactor delay (in microseconds) 00:06:53.939 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:53.939 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:53.939 -R, --huge-unlink unlink huge files after initialization 00:06:53.939 -v, --version print SPDK version 00:06:53.939 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:53.939 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:53.939 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:53.939 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:53.939 Tracepoints vary in size and can use more than one trace entry. 00:06:53.939 --rpcs-allowed comma-separated list of permitted RPCS 00:06:53.939 --env-context Opaque context for use of the env implementation 00:06:53.939 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:53.939 --no-huge run without using hugepages 00:06:53.939 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:53.939 -e, --tpoint-group [:] 00:06:53.939 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:53.939 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:53.939 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:53.939 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:53.939 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:53.939 app_ut: unrecognized option '--test-long-opt' 00:06:53.939 app_ut [options] 00:06:53.939 options: 00:06:53.939 -c, --config JSON config file (default none) 00:06:53.939 --json JSON config file (default none) 00:06:53.939 --json-ignore-init-errors 00:06:53.939 don't exit on invalid config entry 00:06:53.939 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:53.939 -g, --single-file-segments 00:06:53.939 force creating just one hugetlbfs file 00:06:53.939 -h, --help show this usage 00:06:53.939 -i, --shm-id shared memory ID (optional) 00:06:53.939 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:53.939 --lcores lcore to CPU mapping list. The list is in the format: 00:06:53.939 [<,lcores[@CPUs]>...] 00:06:53.939 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:53.939 Within the group, '-' is used for range separator, 00:06:53.939 ',' is used for single number separator. 
00:06:53.939 '( )' can be omitted for single element group, 00:06:53.939 '@' can be omitted if cpus and lcores have the same value 00:06:53.939 -n, --mem-channels channel number of memory channels used for DPDK 00:06:53.939 -p, --main-core main (primary) core for DPDK 00:06:53.939 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:53.939 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:53.939 --disable-cpumask-locks Disable CPU core lock files. 00:06:53.939 --silence-noticelog disable notice level logging to stderr 00:06:53.939 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:53.939 -u, --no-pci disable PCI access 00:06:53.939 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:53.939 --max-delay maximum reactor delay (in microseconds) 00:06:53.939 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:53.939 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:53.939 -R, --huge-unlink unlink huge files after initialization 00:06:53.939 -v, --version print SPDK version 00:06:53.939 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:53.939 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:53.939 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:53.939 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:53.939 Tracepoints vary in size and can use more than one trace entry. 00:06:53.939 --rpcs-allowed comma-separated list of permitted RPCS 00:06:53.939 --env-context Opaque context for use of the env implementation 00:06:53.939 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:53.939 --no-huge run without using hugepages 00:06:53.939 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:53.939 -e, --tpoint-group [:] 00:06:53.939 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:53.939 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:53.939 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:53.939 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:53.939 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:53.939 [2024-07-10 13:30:33.202921] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
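Each usage dump in this test is deliberate: test_spdk_app_parse_args invokes the parser with a bad short option ('z'), an unknown long option ('--test-long-opt'), and an app getopt string that collides with the generic 'c' (--config) flag, checking that every attempt fails cleanly. A rough sketch of the API under test, following the spdk/event.h of this era (spdk_app_opts_init later grew a size argument; parse_cb and usage_cb are illustrative names):

    #include <stdio.h>
    #include "spdk/event.h"

    static int
    parse_cb(int ch, char *arg)
    {
            return 0;  /* accept any app-specific flag */
    }

    static void
    usage_cb(void)
    {
            printf(" -z               app-specific test flag\n");
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};

            spdk_app_opts_init(&opts);
            /* Passing "c" here duplicates the generic --config short option,
             * so parse_args is expected to fail exactly as logged above. */
            if (spdk_app_parse_args(argc, argv, &opts, "c", NULL,
                                    parse_cb, usage_cb) != SPDK_APP_PARSE_ARGS_SUCCESS) {
                    return 1;
            }
            return 0;
    }

The remaining errors in this block (the duplicated 'c', the -B/-W conflict, the invalid main core) are the parser reporting each injected conflict before the test concludes with a pass.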
00:06:53.939 [2024-07-10 13:30:33.203305] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:53.939 app_ut [options] 00:06:53.939 options: 00:06:53.939 -c, --config JSON config file (default none) 00:06:53.939 --json JSON config file (default none) 00:06:53.939 --json-ignore-init-errors 00:06:53.939 don't exit on invalid config entry 00:06:53.939 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:53.939 -g, --single-file-segments 00:06:53.939 force creating just one hugetlbfs file 00:06:53.939 -h, --help show this usage 00:06:53.939 -i, --shm-id shared memory ID (optional) 00:06:53.939 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:53.939 --lcores lcore to CPU mapping list. The list is in the format: 00:06:53.939 [<,lcores[@CPUs]>...] 00:06:53.939 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:53.939 Within the group, '-' is used for range separator, 00:06:53.939 ',' is used for single number separator. 00:06:53.939 '( )' can be omitted for single element group, 00:06:53.939 '@' can be omitted if cpus and lcores have the same value 00:06:53.939 -n, --mem-channels channel number of memory channels used for DPDK 00:06:53.939 -p, --main-core main (primary) core for DPDK 00:06:53.939 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:53.939 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:53.939 --disable-cpumask-locks Disable CPU core lock files. 00:06:53.939 --silence-noticelog disable notice level logging to stderr 00:06:53.939 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:53.939 -u, --no-pci disable PCI access 00:06:53.939 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:53.939 --max-delay maximum reactor delay (in microseconds) 00:06:53.939 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:53.939 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:53.939 -R, --huge-unlink unlink huge files after initialization 00:06:53.939 -v, --version print SPDK version 00:06:53.939 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:53.939 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:53.939 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:53.939 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:53.939 Tracepoints vary in size and can use more than one trace entry. 00:06:53.939 --rpcs-allowed comma-separated list of permitted RPCS 00:06:53.939 --env-context Opaque context for use of the env implementation 00:06:53.939 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:53.939 --no-huge run without using hugepages 00:06:53.939 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:53.939 -e, --tpoint-group [:] 00:06:53.939 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:53.939 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:53.940 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:53.940 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:53.940 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:53.940 [2024-07-10 13:30:33.205837] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:53.940 passed 00:06:53.940 00:06:53.940 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.940 suites 1 1 n/a 0 0 00:06:53.940 tests 1 1 1 0 0 00:06:53.940 asserts 8 8 8 0 n/a 00:06:53.940 00:06:53.940 Elapsed time = 0.002 seconds 00:06:53.940 13:30:33 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:53.940 00:06:53.940 00:06:53.940 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.940 http://cunit.sourceforge.net/ 00:06:53.940 00:06:53.940 00:06:53.940 Suite: app_suite 00:06:53.940 Test: test_create_reactor ...passed 00:06:53.940 Test: test_init_reactors ...passed 00:06:53.940 Test: test_event_call ...passed 00:06:53.940 Test: test_schedule_thread ...passed 00:06:53.940 Test: test_reschedule_thread ...passed 00:06:53.940 Test: test_bind_thread ...passed 00:06:53.940 Test: test_for_each_reactor ...passed 00:06:53.940 Test: test_reactor_stats ...passed 00:06:53.940 Test: test_scheduler ...passed 00:06:53.940 Test: test_governor ...passed 00:06:53.940 00:06:53.940 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.940 suites 1 1 n/a 0 0 00:06:53.940 tests 10 10 10 0 0 00:06:53.940 asserts 344 344 344 0 n/a 00:06:53.940 00:06:53.940 Elapsed time = 0.017 seconds 00:06:53.940 00:06:53.940 real 0m0.109s 00:06:53.940 user 0m0.051s 00:06:53.940 sys 0m0.052s 00:06:53.940 13:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.940 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:53.940 ************************************ 00:06:53.940 END TEST unittest_event 00:06:53.940 ************************************ 00:06:54.198 13:30:33 -- unit/unittest.sh@233 -- # uname -s 00:06:54.198 13:30:33 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:54.198 13:30:33 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:54.198 13:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.198 13:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.198 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 ************************************ 00:06:54.198 START TEST unittest_ftl 00:06:54.198 ************************************ 00:06:54.198 13:30:33 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:54.198 13:30:33 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:54.198 00:06:54.198 00:06:54.198 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.198 http://cunit.sourceforge.net/ 00:06:54.198 00:06:54.198 00:06:54.198 Suite: ftl_band_suite 00:06:54.198 Test: test_band_block_offset_from_addr_base ...passed 00:06:54.198 Test: test_band_block_offset_from_addr_offset ...passed 00:06:54.198 Test: test_band_addr_from_block_offset ...passed 00:06:54.198 Test: test_band_set_addr ...passed 00:06:54.198 Test: test_invalidate_addr ...passed 00:06:54.198 Test: test_next_xfer_addr ...passed 00:06:54.198 00:06:54.198 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.198 suites 1 1 n/a 0 0 00:06:54.198 tests 6 6 6 0 0 00:06:54.198 asserts 30356 30356 30356 0 n/a 00:06:54.198 
00:06:54.198 Elapsed time = 0.121 seconds 00:06:54.198 13:30:33 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_bitmap 00:06:54.459 Test: test_ftl_bitmap_create ...[2024-07-10 13:30:33.574432] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:54.459 [2024-07-10 13:30:33.574913] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:54.459 passed 00:06:54.459 Test: test_ftl_bitmap_get ...passed 00:06:54.459 Test: test_ftl_bitmap_set ...passed 00:06:54.459 Test: test_ftl_bitmap_clear ...passed 00:06:54.459 Test: test_ftl_bitmap_find_first_set ...passed 00:06:54.459 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:54.459 Test: test_ftl_bitmap_count_set ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 7 7 7 0 0 00:06:54.459 asserts 137 137 137 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.002 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_io_suite 00:06:54.459 Test: test_completion ...passed 00:06:54.459 Test: test_multiple_ios ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 2 2 2 0 0 00:06:54.459 asserts 47 47 47 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.003 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_mngt 00:06:54.459 Test: test_next_step ...passed 00:06:54.459 Test: test_continue_step ...passed 00:06:54.459 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:54.459 Test: test_fail_step ...passed 00:06:54.459 Test: test_mngt_call_and_call_rollback ...passed 00:06:54.459 Test: test_nested_process_failure ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 6 6 6 0 0 00:06:54.459 asserts 176 176 176 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.001 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_mempool 00:06:54.459 Test: test_ftl_mempool_create ...passed 00:06:54.459 Test: test_ftl_mempool_get_put ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 2 2 2 0 0 00:06:54.459 asserts 36 36 36 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.000 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_addr64_suite 00:06:54.459 Test: test_addr_cached ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 1 1 1 0 0 00:06:54.459 asserts 1536 1536 1536 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.001 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:54.459 00:06:54.459 00:06:54.459 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.459 http://cunit.sourceforge.net/ 00:06:54.459 00:06:54.459 00:06:54.459 Suite: ftl_sb 00:06:54.459 Test: test_sb_crc_v2 ...passed 00:06:54.459 Test: test_sb_crc_v3 ...passed 00:06:54.459 Test: test_sb_v3_md_layout ...[2024-07-10 13:30:33.790780] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:54.459 [2024-07-10 13:30:33.791334] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:54.459 [2024-07-10 13:30:33.791446] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:54.459 [2024-07-10 13:30:33.791537] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:54.459 [2024-07-10 13:30:33.791618] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:54.459 [2024-07-10 13:30:33.791763] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:54.459 [2024-07-10 13:30:33.791846] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:54.459 [2024-07-10 13:30:33.791951] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:54.459 [2024-07-10 13:30:33.792203] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:54.459 [2024-07-10 13:30:33.792302] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:54.459 [2024-07-10 13:30:33.792383] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:54.459 passed 00:06:54.459 Test: test_sb_v5_md_layout ...passed 00:06:54.459 00:06:54.459 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.459 suites 1 1 n/a 0 0 00:06:54.459 tests 4 4 4 0 0 00:06:54.459 asserts 148 148 148 0 n/a 00:06:54.459 00:06:54.459 Elapsed time = 0.003 seconds 00:06:54.459 13:30:33 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:54.719 00:06:54.719 00:06:54.719 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:54.719 http://cunit.sourceforge.net/ 00:06:54.719 00:06:54.719 00:06:54.719 Suite: ftl_layout_upgrade 00:06:54.719 Test: test_l2p_upgrade ...passed 00:06:54.719 00:06:54.719 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.719 suites 1 1 n/a 0 0 00:06:54.719 tests 1 1 1 0 0 00:06:54.719 asserts 140 140 140 0 n/a 00:06:54.719 00:06:54.719 Elapsed time = 0.001 seconds 00:06:54.719 00:06:54.719 real 0m0.519s 00:06:54.719 user 0m0.274s 00:06:54.719 sys 0m0.244s 00:06:54.719 13:30:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.719 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.720 ************************************ 00:06:54.720 END TEST unittest_ftl 00:06:54.720 ************************************ 00:06:54.720 13:30:33 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:54.720 13:30:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.720 13:30:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.720 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:06:54.720 ************************************ 00:06:54.720 START TEST unittest_accel 00:06:54.720 ************************************ 00:06:54.720 13:30:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:54.720 00:06:54.720 00:06:54.720 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.720 http://cunit.sourceforge.net/ 00:06:54.720 00:06:54.720 00:06:54.720 Suite: accel_sequence 00:06:54.720 Test: test_sequence_fill_copy ...passed 00:06:54.720 Test: test_sequence_abort ...passed 00:06:54.720 Test: test_sequence_append_error ...passed 00:06:54.720 Test: test_sequence_completion_error ...[2024-07-10 13:30:33.967112] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fc52f9c27c0 00:06:54.720 [2024-07-10 13:30:33.967451] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fc52f9c27c0 00:06:54.720 [2024-07-10 13:30:33.967518] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fc52f9c27c0 00:06:54.720 [2024-07-10 13:30:33.967586] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fc52f9c27c0 00:06:54.720 passed 00:06:54.720 Test: test_sequence_decompress ...passed 00:06:54.720 Test: test_sequence_reverse ...passed 00:06:54.720 Test: test_sequence_copy_elision ...passed 00:06:54.720 Test: test_sequence_accel_buffers ...passed 00:06:54.720 Test: test_sequence_memory_domain ...[2024-07-10 13:30:33.976051] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:54.720 [2024-07-10 13:30:33.976257] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:54.720 passed 00:06:54.720 Test: test_sequence_module_memory_domain ...passed 00:06:54.720 Test: test_sequence_crypto ...passed 00:06:54.720 Test: test_sequence_driver ...[2024-07-10 13:30:33.980745] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fc52ed9a7c0 using driver: ut 00:06:54.720 
[2024-07-10 13:30:33.980841] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fc52ed9a7c0 through driver: ut 00:06:54.720 passed 00:06:54.720 Test: test_sequence_same_iovs ...passed 00:06:54.720 Test: test_sequence_crc32 ...passed 00:06:54.720 Suite: accel 00:06:54.720 Test: test_spdk_accel_task_complete ...passed 00:06:54.720 Test: test_get_task ...passed 00:06:54.720 Test: test_spdk_accel_submit_copy ...passed 00:06:54.720 Test: test_spdk_accel_submit_dualcast ...[2024-07-10 13:30:33.984266] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:54.720 [2024-07-10 13:30:33.984326] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:54.720 passed 00:06:54.720 Test: test_spdk_accel_submit_compare ...passed 00:06:54.720 Test: test_spdk_accel_submit_fill ...passed 00:06:54.720 Test: test_spdk_accel_submit_crc32c ...passed 00:06:54.720 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:54.720 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:54.720 Test: test_spdk_accel_submit_xor ...passed 00:06:54.720 Test: test_spdk_accel_module_find_by_name ...passed 00:06:54.720 Test: test_spdk_accel_module_register ...passed 00:06:54.720 00:06:54.720 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.720 suites 2 2 n/a 0 0 00:06:54.720 tests 26 26 26 0 0 00:06:54.720 asserts 831 831 831 0 n/a 00:06:54.720 00:06:54.720 Elapsed time = 0.027 seconds 00:06:54.720 00:06:54.720 real 0m0.079s 00:06:54.720 user 0m0.029s 00:06:54.720 sys 0m0.050s 00:06:54.720 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.720 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.720 ************************************ 00:06:54.720 END TEST unittest_accel 00:06:54.720 ************************************ 00:06:54.720 13:30:34 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:54.720 13:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.720 13:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.720 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.720 ************************************ 00:06:54.720 START TEST unittest_ioat 00:06:54.720 ************************************ 00:06:54.720 13:30:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:54.979 00:06:54.979 00:06:54.979 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.979 http://cunit.sourceforge.net/ 00:06:54.979 00:06:54.979 00:06:54.979 Suite: ioat 00:06:54.979 Test: ioat_state_check ...passed 00:06:54.979 00:06:54.979 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.979 suites 1 1 n/a 0 0 00:06:54.979 tests 1 1 1 0 0 00:06:54.979 asserts 32 32 32 0 n/a 00:06:54.979 00:06:54.979 Elapsed time = 0.000 seconds 00:06:54.979 00:06:54.979 real 0m0.045s 00:06:54.979 user 0m0.020s 00:06:54.979 sys 0m0.025s 00:06:54.979 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.979 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.979 ************************************ 00:06:54.979 END TEST unittest_ioat 00:06:54.979 ************************************ 00:06:54.980 13:30:34 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:54.980 13:30:34 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:54.980 13:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.980 13:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.980 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.980 ************************************ 00:06:54.980 START TEST unittest_idxd_user 00:06:54.980 ************************************ 00:06:54.980 13:30:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:54.980 00:06:54.980 00:06:54.980 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.980 http://cunit.sourceforge.net/ 00:06:54.980 00:06:54.980 00:06:54.980 Suite: idxd_user 00:06:54.980 Test: test_idxd_wait_cmd ...[2024-07-10 13:30:34.216140] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:54.980 [2024-07-10 13:30:34.216561] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:54.980 passed 00:06:54.980 Test: test_idxd_reset_dev ...[2024-07-10 13:30:34.216853] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:54.980 [2024-07-10 13:30:34.216953] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:54.980 passed 00:06:54.980 Test: test_idxd_group_config ...passed 00:06:54.980 Test: test_idxd_wq_config ...passed 00:06:54.980 00:06:54.980 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.980 suites 1 1 n/a 0 0 00:06:54.980 tests 4 4 4 0 0 00:06:54.980 asserts 20 20 20 0 n/a 00:06:54.980 00:06:54.980 Elapsed time = 0.001 seconds 00:06:54.980 00:06:54.980 real 0m0.043s 00:06:54.980 user 0m0.019s 00:06:54.980 sys 0m0.023s 00:06:54.980 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.980 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.980 ************************************ 00:06:54.980 END TEST unittest_idxd_user 00:06:54.980 ************************************ 00:06:54.980 13:30:34 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:54.980 13:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.980 13:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.980 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:54.980 ************************************ 00:06:54.980 START TEST unittest_iscsi 00:06:54.980 ************************************ 00:06:54.980 13:30:34 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:06:54.980 13:30:34 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:54.980 00:06:54.980 00:06:54.980 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.980 http://cunit.sourceforge.net/ 00:06:54.980 00:06:54.980 00:06:54.980 Suite: conn_suite 00:06:54.980 Test: read_task_split_in_order_case ...passed 00:06:54.980 Test: read_task_split_reverse_order_case ...passed 00:06:54.980 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:54.980 Test: process_non_read_task_completion_test ...passed 00:06:54.980 Test: free_tasks_on_connection ...passed 00:06:54.980 Test: free_tasks_with_queued_datain ...passed 00:06:54.980 Test: 
abort_queued_datain_task_test ...passed 00:06:54.980 Test: abort_queued_datain_tasks_test ...passed 00:06:54.980 00:06:54.980 Run Summary: Type Total Ran Passed Failed Inactive 00:06:54.980 suites 1 1 n/a 0 0 00:06:54.980 tests 8 8 8 0 0 00:06:54.980 asserts 230 230 230 0 n/a 00:06:54.980 00:06:54.980 Elapsed time = 0.000 seconds 00:06:55.241 13:30:34 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:55.241 00:06:55.241 00:06:55.241 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.241 http://cunit.sourceforge.net/ 00:06:55.241 00:06:55.241 00:06:55.241 Suite: iscsi_suite 00:06:55.241 Test: param_negotiation_test ...passed 00:06:55.241 Test: list_negotiation_test ...passed 00:06:55.241 Test: parse_valid_test ...passed 00:06:55.241 Test: parse_invalid_test ...[2024-07-10 13:30:34.378982] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:55.241 [2024-07-10 13:30:34.379436] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:55.241 [2024-07-10 13:30:34.379555] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:55.241 [2024-07-10 13:30:34.379689] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:55.241 [2024-07-10 13:30:34.379928] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:55.241 [2024-07-10 13:30:34.380049] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:55.241 [2024-07-10 13:30:34.380295] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:55.241 passed 00:06:55.241 00:06:55.241 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.241 suites 1 1 n/a 0 0 00:06:55.241 tests 4 4 4 0 0 00:06:55.241 asserts 161 161 161 0 n/a 00:06:55.241 00:06:55.241 Elapsed time = 0.007 seconds 00:06:55.241 13:30:34 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:55.241 00:06:55.241 00:06:55.241 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.241 http://cunit.sourceforge.net/ 00:06:55.241 00:06:55.241 00:06:55.241 Suite: iscsi_target_node_suite 00:06:55.241 Test: add_lun_test_cases ...[2024-07-10 13:30:34.417906] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:55.241 [2024-07-10 13:30:34.418282] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:55.241 [2024-07-10 13:30:34.418414] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:55.241 [2024-07-10 13:30:34.418496] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:55.241 [2024-07-10 13:30:34.418564] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:55.241 passed 00:06:55.241 Test: allow_any_allowed ...passed 00:06:55.241 Test: allow_ipv6_allowed ...passed 00:06:55.241 Test: allow_ipv6_denied ...passed 00:06:55.241 Test: allow_ipv6_invalid ...passed 00:06:55.241 Test: allow_ipv4_allowed ...passed 00:06:55.241 Test: allow_ipv4_denied ...passed 00:06:55.241 Test: allow_ipv4_invalid 
...passed 00:06:55.241 Test: node_access_allowed ...passed 00:06:55.241 Test: node_access_denied_by_empty_netmask ...passed 00:06:55.241 Test: node_access_multi_initiator_groups_cases ...passed 00:06:55.241 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:55.241 Test: chap_param_test_cases ...[2024-07-10 13:30:34.419825] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:55.241 [2024-07-10 13:30:34.419909] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:55.241 [2024-07-10 13:30:34.419996] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:55.241 [2024-07-10 13:30:34.420058] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:55.241 [2024-07-10 13:30:34.420160] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:55.241 passed 00:06:55.241 00:06:55.241 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.241 suites 1 1 n/a 0 0 00:06:55.241 tests 13 13 13 0 0 00:06:55.241 asserts 50 50 50 0 n/a 00:06:55.241 00:06:55.241 Elapsed time = 0.001 seconds 00:06:55.241 13:30:34 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:55.241 00:06:55.241 00:06:55.241 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.241 http://cunit.sourceforge.net/ 00:06:55.241 00:06:55.241 00:06:55.241 Suite: iscsi_suite 00:06:55.241 Test: op_login_check_target_test ...[2024-07-10 13:30:34.471752] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:55.241 passed 00:06:55.241 Test: op_login_session_normal_test ...[2024-07-10 13:30:34.472388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:55.242 [2024-07-10 13:30:34.472494] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:55.242 [2024-07-10 13:30:34.472582] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:55.242 [2024-07-10 13:30:34.472684] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:55.242 [2024-07-10 13:30:34.472859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:55.242 [2024-07-10 13:30:34.473033] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:55.242 [2024-07-10 13:30:34.473153] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:55.242 passed 00:06:55.242 Test: maxburstlength_test ...[2024-07-10 13:30:34.473549] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:55.242 [2024-07-10 13:30:34.473625] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:06:55.242 passed 00:06:55.242 Test: underflow_for_read_transfer_test ...passed 00:06:55.242 Test: underflow_for_zero_read_transfer_test ...passed 00:06:55.242 Test: underflow_for_request_sense_test ...passed 00:06:55.242 Test: underflow_for_check_condition_test ...passed 00:06:55.242 Test: add_transfer_task_test ...passed 00:06:55.242 Test: get_transfer_task_test ...passed 00:06:55.242 Test: del_transfer_task_test ...passed 00:06:55.242 Test: clear_all_transfer_tasks_test ...passed 00:06:55.242 Test: build_iovs_test ...passed 00:06:55.242 Test: build_iovs_with_md_test ...passed 00:06:55.242 Test: pdu_hdr_op_login_test ...[2024-07-10 13:30:34.475771] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:55.242 [2024-07-10 13:30:34.475906] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:55.242 [2024-07-10 13:30:34.476021] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_text_test ...[2024-07-10 13:30:34.476214] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:55.242 [2024-07-10 13:30:34.476348] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:55.242 [2024-07-10 13:30:34.476420] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_logout_test ...[2024-07-10 13:30:34.476579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_scsi_test ...[2024-07-10 13:30:34.476823] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:55.242 [2024-07-10 13:30:34.476888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:55.242 [2024-07-10 13:30:34.476965] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:55.242 [2024-07-10 13:30:34.477084] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:55.242 [2024-07-10 13:30:34.477204] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:55.242 [2024-07-10 13:30:34.477420] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-10 13:30:34.477617] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:55.242 [2024-07-10 13:30:34.477730] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_nopout_test ...[2024-07-10 13:30:34.478017] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:55.242 [2024-07-10 13:30:34.478138] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:55.242 [2024-07-10 13:30:34.478196] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:55.242 [2024-07-10 13:30:34.478255] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:55.242 passed 00:06:55.242 Test: pdu_hdr_op_data_test ...[2024-07-10 13:30:34.478365] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:55.242 [2024-07-10 13:30:34.478471] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:55.242 [2024-07-10 13:30:34.478559] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:55.242 [2024-07-10 13:30:34.478637] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:55.242 [2024-07-10 13:30:34.478715] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:55.242 [2024-07-10 13:30:34.478824] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:55.242 [2024-07-10 13:30:34.478885] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:55.242 passed 00:06:55.242 Test: empty_text_with_cbit_test ...passed 00:06:55.242 Test: pdu_payload_read_test ...[2024-07-10 
13:30:34.481300] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:55.242 passed 00:06:55.242 Test: data_out_pdu_sequence_test ...passed 00:06:55.242 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:55.242 00:06:55.242 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.242 suites 1 1 n/a 0 0 00:06:55.242 tests 24 24 24 0 0 00:06:55.242 asserts 150253 150253 150253 0 n/a 00:06:55.242 00:06:55.242 Elapsed time = 0.015 seconds 00:06:55.242 13:30:34 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:55.242 00:06:55.242 00:06:55.242 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.242 http://cunit.sourceforge.net/ 00:06:55.242 00:06:55.242 00:06:55.242 Suite: init_grp_suite 00:06:55.242 Test: create_initiator_group_success_case ...passed 00:06:55.242 Test: find_initiator_group_success_case ...passed 00:06:55.242 Test: register_initiator_group_twice_case ...passed 00:06:55.242 Test: add_initiator_name_success_case ...passed 00:06:55.242 Test: add_initiator_name_fail_case ...[2024-07-10 13:30:34.528285] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:55.242 passed 00:06:55.242 Test: delete_all_initiator_names_success_case ...passed 00:06:55.242 Test: add_netmask_success_case ...passed 00:06:55.242 Test: add_netmask_fail_case ...[2024-07-10 13:30:34.529039] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:55.242 passed 00:06:55.242 Test: delete_all_netmasks_success_case ...passed 00:06:55.242 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:55.242 Test: netmask_overwrite_all_to_any_case ...passed 00:06:55.242 Test: add_delete_initiator_names_case ...passed 00:06:55.242 Test: add_duplicated_initiator_names_case ...passed 00:06:55.242 Test: delete_nonexisting_initiator_names_case ...passed 00:06:55.242 Test: add_delete_netmasks_case ...passed 00:06:55.242 Test: add_duplicated_netmasks_case ...passed 00:06:55.242 Test: delete_nonexisting_netmasks_case ...passed 00:06:55.242 00:06:55.242 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.242 suites 1 1 n/a 0 0 00:06:55.242 tests 17 17 17 0 0 00:06:55.242 asserts 108 108 108 0 n/a 00:06:55.242 00:06:55.242 Elapsed time = 0.001 seconds 00:06:55.242 13:30:34 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:55.242 00:06:55.242 00:06:55.242 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.242 http://cunit.sourceforge.net/ 00:06:55.242 00:06:55.242 00:06:55.242 Suite: portal_grp_suite 00:06:55.242 Test: portal_create_ipv4_normal_case ...passed 00:06:55.242 Test: portal_create_ipv6_normal_case ...passed 00:06:55.242 Test: portal_create_ipv4_wildcard_case ...passed 00:06:55.242 Test: portal_create_ipv6_wildcard_case ...passed 00:06:55.242 Test: portal_create_twice_case ...[2024-07-10 13:30:34.571686] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:55.242 passed 00:06:55.242 Test: portal_grp_register_unregister_case ...passed 00:06:55.242 Test: portal_grp_register_twice_case ...passed 00:06:55.242 Test: portal_grp_add_delete_case ...passed 00:06:55.242 Test: portal_grp_add_delete_twice_case ...passed 00:06:55.242 00:06:55.242 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:55.242 suites 1 1 n/a 0 0 00:06:55.242 tests 9 9 9 0 0 00:06:55.242 asserts 44 44 44 0 n/a 00:06:55.242 00:06:55.242 Elapsed time = 0.004 seconds 00:06:55.242 00:06:55.242 real 0m0.301s 00:06:55.242 user 0m0.181s 00:06:55.242 sys 0m0.115s 00:06:55.242 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.242 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.242 ************************************ 00:06:55.242 END TEST unittest_iscsi 00:06:55.242 ************************************ 00:06:55.503 13:30:34 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:55.503 13:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.503 13:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.503 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.503 ************************************ 00:06:55.503 START TEST unittest_json 00:06:55.503 ************************************ 00:06:55.503 13:30:34 -- common/autotest_common.sh@1104 -- # unittest_json 00:06:55.503 13:30:34 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:55.503 00:06:55.503 00:06:55.503 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.503 http://cunit.sourceforge.net/ 00:06:55.503 00:06:55.503 00:06:55.503 Suite: json 00:06:55.503 Test: test_parse_literal ...passed 00:06:55.503 Test: test_parse_string_simple ...passed 00:06:55.503 Test: test_parse_string_control_chars ...passed 00:06:55.503 Test: test_parse_string_utf8 ...passed 00:06:55.503 Test: test_parse_string_escapes_twochar ...passed 00:06:55.503 Test: test_parse_string_escapes_unicode ...passed 00:06:55.503 Test: test_parse_number ...passed 00:06:55.503 Test: test_parse_array ...passed 00:06:55.503 Test: test_parse_object ...passed 00:06:55.503 Test: test_parse_nesting ...passed 00:06:55.503 Test: test_parse_comment ...passed 00:06:55.503 00:06:55.503 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.503 suites 1 1 n/a 0 0 00:06:55.503 tests 11 11 11 0 0 00:06:55.503 asserts 1516 1516 1516 0 n/a 00:06:55.503 00:06:55.503 Elapsed time = 0.002 seconds 00:06:55.503 13:30:34 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:55.503 00:06:55.503 00:06:55.503 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.503 http://cunit.sourceforge.net/ 00:06:55.503 00:06:55.503 00:06:55.503 Suite: json 00:06:55.503 Test: test_strequal ...passed 00:06:55.503 Test: test_num_to_uint16 ...passed 00:06:55.503 Test: test_num_to_int32 ...passed 00:06:55.503 Test: test_num_to_uint64 ...passed 00:06:55.503 Test: test_decode_object ...passed 00:06:55.503 Test: test_decode_array ...passed 00:06:55.503 Test: test_decode_bool ...passed 00:06:55.503 Test: test_decode_uint16 ...passed 00:06:55.503 Test: test_decode_int32 ...passed 00:06:55.503 Test: test_decode_uint32 ...passed 00:06:55.503 Test: test_decode_uint64 ...passed 00:06:55.503 Test: test_decode_string ...passed 00:06:55.503 Test: test_decode_uuid ...passed 00:06:55.503 Test: test_find ...passed 00:06:55.503 Test: test_find_array ...passed 00:06:55.503 Test: test_iterating ...passed 00:06:55.503 Test: test_free_object ...passed 00:06:55.503 00:06:55.503 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.503 suites 1 1 n/a 0 0 00:06:55.503 tests 17 17 17 0 0 00:06:55.503 asserts 236 236 236 0 n/a 00:06:55.503 00:06:55.503 Elapsed time = 0.001 seconds 00:06:55.503 
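The json_parse_ut and json_util_ut runs above, and json_write_ut just below, cover SPDK's JSON parser, decode helpers, and writer respectively. A short sketch of the writer side, assuming the callback-based spdk_json_write_begin() interface from the public spdk/json.h header (names follow current SPDK, but signatures occasionally shift between releases):

#include "spdk/stdinc.h"
#include "spdk/json.h"

/* Writer callback: receives encoded JSON bytes; this sketch just prints them. */
static int
write_cb(void *cb_ctx, const void *data, size_t size)
{
    (void)cb_ctx;
    return fwrite(data, 1, size, stdout) == size ? 0 : -1;
}

int
main(void)
{
    struct spdk_json_write_ctx *w;

    w = spdk_json_write_begin(write_cb, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);
    if (w == NULL) {
        return 1;
    }

    /* Emits {"name": "bdev0", "block_size": 4096} through write_cb. */
    spdk_json_write_object_begin(w);
    spdk_json_write_named_string(w, "name", "bdev0");
    spdk_json_write_named_uint32(w, "block_size", 4096);
    spdk_json_write_object_end(w);

    return spdk_json_write_end(w);
}

The test_write_* cases in the json_write_ut summary that follows walk these same begin/end pairings, including nested objects and arrays.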
13:30:34 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:55.503 00:06:55.503 00:06:55.503 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.503 http://cunit.sourceforge.net/ 00:06:55.503 00:06:55.503 00:06:55.503 Suite: json 00:06:55.503 Test: test_write_literal ...passed 00:06:55.503 Test: test_write_string_simple ...passed 00:06:55.503 Test: test_write_string_escapes ...passed 00:06:55.503 Test: test_write_string_utf16le ...passed 00:06:55.503 Test: test_write_number_int32 ...passed 00:06:55.503 Test: test_write_number_uint32 ...passed 00:06:55.503 Test: test_write_number_uint128 ...passed 00:06:55.503 Test: test_write_string_number_uint128 ...passed 00:06:55.503 Test: test_write_number_int64 ...passed 00:06:55.503 Test: test_write_number_uint64 ...passed 00:06:55.503 Test: test_write_number_double ...passed 00:06:55.503 Test: test_write_uuid ...passed 00:06:55.503 Test: test_write_array ...passed 00:06:55.503 Test: test_write_object ...passed 00:06:55.503 Test: test_write_nesting ...passed 00:06:55.504 Test: test_write_val ...passed 00:06:55.504 00:06:55.504 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.504 suites 1 1 n/a 0 0 00:06:55.504 tests 16 16 16 0 0 00:06:55.504 asserts 918 918 918 0 n/a 00:06:55.504 00:06:55.504 Elapsed time = 0.006 seconds 00:06:55.504 13:30:34 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:55.504 00:06:55.504 00:06:55.504 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.504 http://cunit.sourceforge.net/ 00:06:55.504 00:06:55.504 00:06:55.504 Suite: jsonrpc 00:06:55.504 Test: test_parse_request ...passed 00:06:55.504 Test: test_parse_request_streaming ...passed 00:06:55.504 00:06:55.504 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.504 suites 1 1 n/a 0 0 00:06:55.504 tests 2 2 2 0 0 00:06:55.504 asserts 289 289 289 0 n/a 00:06:55.504 00:06:55.504 Elapsed time = 0.005 seconds 00:06:55.504 00:06:55.504 real 0m0.189s 00:06:55.504 user 0m0.118s 00:06:55.504 sys 0m0.069s 00:06:55.504 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.504 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.504 ************************************ 00:06:55.504 END TEST unittest_json 00:06:55.504 ************************************ 00:06:55.765 13:30:34 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:55.765 13:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.765 13:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.765 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.765 ************************************ 00:06:55.765 START TEST unittest_rpc 00:06:55.765 ************************************ 00:06:55.765 13:30:34 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:55.765 13:30:34 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:55.765 00:06:55.765 00:06:55.765 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.765 http://cunit.sourceforge.net/ 00:06:55.765 00:06:55.765 00:06:55.765 Suite: rpc 00:06:55.765 Test: test_jsonrpc_handler ...passed 00:06:55.765 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:55.765 Test: test_rpc_get_methods ...[2024-07-10 13:30:34.937224] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:55.765 passed 00:06:55.765 Test: 
test_rpc_spdk_get_version ...passed 00:06:55.765 Test: test_spdk_rpc_listen_close ...passed 00:06:55.765 00:06:55.765 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.765 suites 1 1 n/a 0 0 00:06:55.765 tests 5 5 5 0 0 00:06:55.765 asserts 20 20 20 0 n/a 00:06:55.765 00:06:55.765 Elapsed time = 0.001 seconds 00:06:55.765 00:06:55.765 real 0m0.046s 00:06:55.765 user 0m0.032s 00:06:55.765 sys 0m0.013s 00:06:55.765 13:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.765 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:06:55.765 ************************************ 00:06:55.765 END TEST unittest_rpc 00:06:55.765 ************************************ 00:06:55.765 13:30:35 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:55.765 13:30:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.765 13:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.765 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.765 ************************************ 00:06:55.765 START TEST unittest_notify 00:06:55.765 ************************************ 00:06:55.765 13:30:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:55.765 00:06:55.765 00:06:55.765 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.765 http://cunit.sourceforge.net/ 00:06:55.765 00:06:55.765 00:06:55.765 Suite: app_suite 00:06:55.765 Test: notify ...passed 00:06:55.765 00:06:55.765 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.765 suites 1 1 n/a 0 0 00:06:55.765 tests 1 1 1 0 0 00:06:55.765 asserts 13 13 13 0 n/a 00:06:55.765 00:06:55.765 Elapsed time = 0.000 seconds 00:06:55.765 00:06:55.765 real 0m0.042s 00:06:55.765 user 0m0.025s 00:06:55.765 sys 0m0.017s 00:06:55.765 13:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.765 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:55.765 ************************************ 00:06:55.765 END TEST unittest_notify 00:06:55.765 ************************************ 00:06:55.765 13:30:35 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:55.765 13:30:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.765 13:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.765 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.025 ************************************ 00:06:56.025 START TEST unittest_nvme 00:06:56.025 ************************************ 00:06:56.025 13:30:35 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:56.025 13:30:35 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:56.025 00:06:56.025 00:06:56.025 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.025 http://cunit.sourceforge.net/ 00:06:56.026 00:06:56.026 00:06:56.026 Suite: nvme 00:06:56.026 Test: test_opc_data_transfer ...passed 00:06:56.026 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:56.026 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:56.026 Test: test_trid_parse_and_compare ...[2024-07-10 13:30:35.154860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:56.026 [2024-07-10 13:30:35.155893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:56.026 [2024-07-10 
13:30:35.156276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:56.026 [2024-07-10 13:30:35.156511] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:56.026 [2024-07-10 13:30:35.156704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:56.026 [2024-07-10 13:30:35.156983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:56.026 passed 00:06:56.026 Test: test_trid_trtype_str ...passed 00:06:56.026 Test: test_trid_adrfam_str ...passed 00:06:56.026 Test: test_nvme_ctrlr_probe ...[2024-07-10 13:30:35.157658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:56.026 passed 00:06:56.026 Test: test_spdk_nvme_probe ...[2024-07-10 13:30:35.158046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:56.026 [2024-07-10 13:30:35.158267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:56.026 [2024-07-10 13:30:35.158560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:56.026 [2024-07-10 13:30:35.158775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:56.026 passed 00:06:56.026 Test: test_spdk_nvme_connect ...[2024-07-10 13:30:35.159164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:56.026 [2024-07-10 13:30:35.159842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:56.026 [2024-07-10 13:30:35.160110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_probe_internal ...[2024-07-10 13:30:35.160581] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:56.026 [2024-07-10 13:30:35.160794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:56.026 passed 00:06:56.026 Test: test_nvme_init_controllers ...[2024-07-10 13:30:35.161143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:56.026 passed 00:06:56.026 Test: test_nvme_driver_init ...[2024-07-10 13:30:35.161456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:56.026 [2024-07-10 13:30:35.161593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:56.026 [2024-07-10 13:30:35.270471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:56.026 [2024-07-10 13:30:35.271314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:56.026 passed 00:06:56.026 Test: test_spdk_nvme_detach ...passed 00:06:56.026 Test: test_nvme_completion_poll_cb ...passed 00:06:56.026 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:56.026 Test: 
test_nvme_allocate_request_null ...passed 00:06:56.026 Test: test_nvme_allocate_request ...passed 00:06:56.026 Test: test_nvme_free_request ...passed 00:06:56.026 Test: test_nvme_allocate_request_user_copy ...passed 00:06:56.026 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:56.026 Test: test_nvme_request_check_timeout ...passed 00:06:56.026 Test: test_nvme_wait_for_completion ...passed 00:06:56.026 Test: test_spdk_nvme_parse_func ...passed 00:06:56.026 Test: test_spdk_nvme_detach_async ...passed 00:06:56.026 Test: test_nvme_parse_addr ...[2024-07-10 13:30:35.274662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:56.026 passed 00:06:56.026 00:06:56.026 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.026 suites 1 1 n/a 0 0 00:06:56.026 tests 25 25 25 0 0 00:06:56.026 asserts 326 326 326 0 n/a 00:06:56.026 00:06:56.026 Elapsed time = 0.007 seconds 00:06:56.026 13:30:35 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:56.026 00:06:56.026 00:06:56.026 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.026 http://cunit.sourceforge.net/ 00:06:56.026 00:06:56.026 00:06:56.026 Suite: nvme_ctrlr 00:06:56.026 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-10 13:30:35.316447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-10 13:30:35.318231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-10 13:30:35.319555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-10 13:30:35.320859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-10 13:30:35.322163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 [2024-07-10 13:30:35.323330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:30:35.324547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:30:35.325724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-10 13:30:35.328180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 [2024-07-10 13:30:35.330433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:30:35.331601] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:56.026 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-10 13:30:35.334030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 [2024-07-10 13:30:35.335240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-10 13:30:35.337594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:56.026 Test: test_nvme_ctrlr_init_delay ...[2024-07-10 13:30:35.340159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_alloc_io_qpair_rr_1 ...[2024-07-10 13:30:35.341587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 [2024-07-10 13:30:35.341746] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:56.026 [2024-07-10 13:30:35.342009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:56.026 [2024-07-10 13:30:35.342125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:56.026 [2024-07-10 13:30:35.342214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:56.026 passed 00:06:56.026 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:56.026 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:56.026 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-10 13:30:35.342669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 passed 00:06:56.026 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-10 13:30:35.343059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.026 [2024-07-10 13:30:35.343255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:56.026 passed 00:06:56.026 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-10 13:30:35.343749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:56.026 [2024-07-10 13:30:35.344047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:56.026 [2024-07-10 13:30:35.344315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:06:56.026 [2024-07-10 13:30:35.344509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_fail ...[2024-07-10 13:30:35.344806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:56.026 passed 00:06:56.026 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:56.026 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:56.026 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:56.026 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-10 13:30:35.345745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:56.287 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:56.287 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:56.287 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-10 13:30:35.566108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-10 13:30:35.572881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-10 13:30:35.574068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 [2024-07-10 13:30:35.574117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:56.287 passed 00:06:56.287 Test: test_alloc_io_qpair_fail ...[2024-07-10 13:30:35.575247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 [2024-07-10 13:30:35.575317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:56.287 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:56.287 Test: test_nvme_ctrlr_set_state ...[2024-07-10 13:30:35.575547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
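
Each suite in this run is an ordinary CUnit registration: every "Test: ... passed" line comes from one registered test function, and the "Run Summary" tables are printed by CUnit's basic runner. A minimal sketch of such a harness, with an illustrative suite and test name rather than the actual SPDK unit-test sources:

```c
#include <CUnit/Basic.h>

/* One registered function per "Test: ... passed" line in the log. */
static void
test_example_passes(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_example_passes", test_example_passes) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test lines    */
	CU_basic_run_tests();              /* prints the Run Summary table */
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures == 0 ? 0 : 1;
}
```

Built against libcunit, this emits the same banner, per-test lines, and Run Summary layout seen throughout this log.
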
00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-10 13:30:35.575632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-10 13:30:35.591938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-10 13:30:35.619972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_reset ...[2024-07-10 13:30:35.621404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_aer_callback ...[2024-07-10 13:30:35.621706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-10 13:30:35.623049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:56.287 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:56.287 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-10 13:30:35.624623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:56.287 Test: test_nvme_ctrlr_ana_resize ...[2024-07-10 13:30:35.625932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:56.287 Test: test_nvme_transport_ctrlr_ready ...[2024-07-10 13:30:35.627433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:56.287 [2024-07-10 13:30:35.627489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:56.287 passed 00:06:56.287 Test: test_nvme_ctrlr_disable ...[2024-07-10 13:30:35.627596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:56.287 passed 00:06:56.287 00:06:56.287 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.287 suites 1 1 n/a 0 0 00:06:56.287 tests 43 43 43 0 0 00:06:56.287 asserts 10418 10418 10418 0 n/a 00:06:56.287 00:06:56.287 Elapsed time = 0.269 seconds 00:06:56.548 13:30:35 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:56.548 00:06:56.548 
00:06:56.548 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.548 http://cunit.sourceforge.net/ 00:06:56.548 00:06:56.548 00:06:56.548 Suite: nvme_ctrlr_cmd 00:06:56.548 Test: test_get_log_pages ...passed 00:06:56.548 Test: test_set_feature_cmd ...passed 00:06:56.548 Test: test_set_feature_ns_cmd ...passed 00:06:56.548 Test: test_get_feature_cmd ...passed 00:06:56.548 Test: test_get_feature_ns_cmd ...passed 00:06:56.548 Test: test_abort_cmd ...passed 00:06:56.548 Test: test_set_host_id_cmds ...[2024-07-10 13:30:35.691207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:56.548 passed 00:06:56.548 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:56.548 Test: test_io_raw_cmd ...passed 00:06:56.548 Test: test_io_raw_cmd_with_md ...passed 00:06:56.548 Test: test_namespace_attach ...passed 00:06:56.548 Test: test_namespace_detach ...passed 00:06:56.548 Test: test_namespace_create ...passed 00:06:56.548 Test: test_namespace_delete ...passed 00:06:56.548 Test: test_doorbell_buffer_config ...passed 00:06:56.548 Test: test_format_nvme ...passed 00:06:56.548 Test: test_fw_commit ...passed 00:06:56.548 Test: test_fw_image_download ...passed 00:06:56.548 Test: test_sanitize ...passed 00:06:56.548 Test: test_directive ...passed 00:06:56.548 Test: test_nvme_request_add_abort ...passed 00:06:56.548 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:56.548 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:56.548 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:56.548 00:06:56.548 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.548 suites 1 1 n/a 0 0 00:06:56.548 tests 24 24 24 0 0 00:06:56.548 asserts 198 198 198 0 n/a 00:06:56.548 00:06:56.548 Elapsed time = 0.001 seconds 00:06:56.548 13:30:35 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:56.548 00:06:56.548 00:06:56.548 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.548 http://cunit.sourceforge.net/ 00:06:56.548 00:06:56.548 00:06:56.548 Suite: nvme_ctrlr_cmd 00:06:56.548 Test: test_geometry_cmd ...passed 00:06:56.548 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:56.548 00:06:56.548 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.548 suites 1 1 n/a 0 0 00:06:56.548 tests 2 2 2 0 0 00:06:56.548 asserts 7 7 7 0 n/a 00:06:56.548 00:06:56.548 Elapsed time = 0.000 seconds 00:06:56.548 13:30:35 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:56.548 00:06:56.548 00:06:56.548 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.548 http://cunit.sourceforge.net/ 00:06:56.548 00:06:56.548 00:06:56.548 Suite: nvme 00:06:56.548 Test: test_nvme_ns_construct ...passed 00:06:56.548 Test: test_nvme_ns_uuid ...passed 00:06:56.548 Test: test_nvme_ns_csi ...passed 00:06:56.548 Test: test_nvme_ns_data ...passed 00:06:56.548 Test: test_nvme_ns_set_identify_data ...passed 00:06:56.548 Test: test_spdk_nvme_ns_get_values ...passed 00:06:56.548 Test: test_spdk_nvme_ns_is_active ...passed 00:06:56.548 Test: spdk_nvme_ns_supports ...passed 00:06:56.548 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:56.548 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:56.548 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:56.548 Test: test_nvme_ns_find_id_desc ...passed 00:06:56.548 00:06:56.548 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:56.548 suites 1 1 n/a 0 0 00:06:56.548 tests 12 12 12 0 0 00:06:56.548 asserts 83 83 83 0 n/a 00:06:56.548 00:06:56.548 Elapsed time = 0.001 seconds 00:06:56.548 13:30:35 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:56.548 00:06:56.548 00:06:56.548 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.548 http://cunit.sourceforge.net/ 00:06:56.548 00:06:56.548 00:06:56.548 Suite: nvme_ns_cmd 00:06:56.548 Test: split_test ...passed 00:06:56.548 Test: split_test2 ...passed 00:06:56.548 Test: split_test3 ...passed 00:06:56.548 Test: split_test4 ...passed 00:06:56.548 Test: test_nvme_ns_cmd_flush ...passed 00:06:56.548 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:56.548 Test: test_nvme_ns_cmd_copy ...passed 00:06:56.548 Test: test_io_flags ...[2024-07-10 13:30:35.825599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:56.548 passed 00:06:56.548 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:56.548 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:56.548 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:56.548 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:56.548 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:56.548 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:56.548 Test: test_cmd_child_request ...passed 00:06:56.548 Test: test_nvme_ns_cmd_readv ...passed 00:06:56.548 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_writev ...[2024-07-10 13:30:35.828129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:56.548 passed 00:06:56.548 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_comparev ...passed 00:06:56.548 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:56.548 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:56.548 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:56.548 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:56.548 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-10 13:30:35.830771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:56.548 passed 00:06:56.548 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-10 13:30:35.830965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:56.548 passed 00:06:56.548 Test: test_nvme_ns_cmd_verify ...passed 00:06:56.548 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:56.548 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:56.548 00:06:56.548 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.548 suites 1 1 n/a 0 0 00:06:56.548 tests 32 32 32 0 0 00:06:56.548 asserts 550 550 550 0 n/a 00:06:56.548 00:06:56.548 Elapsed time = 0.006 seconds 00:06:56.548 13:30:35 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:56.548 00:06:56.548 00:06:56.548 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.548 http://cunit.sourceforge.net/ 00:06:56.548 00:06:56.548 00:06:56.548 Suite: 
nvme_ns_cmd 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:56.548 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:56.548 00:06:56.548 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.549 suites 1 1 n/a 0 0 00:06:56.549 tests 12 12 12 0 0 00:06:56.549 asserts 123 123 123 0 n/a 00:06:56.549 00:06:56.549 Elapsed time = 0.002 seconds 00:06:56.549 13:30:35 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:56.809 00:06:56.809 00:06:56.809 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.809 http://cunit.sourceforge.net/ 00:06:56.809 00:06:56.809 00:06:56.809 Suite: nvme_qpair 00:06:56.809 Test: test3 ...passed 00:06:56.809 Test: test_ctrlr_failed ...passed 00:06:56.809 Test: struct_packing ...passed 00:06:56.809 Test: test_nvme_qpair_process_completions ...[2024-07-10 13:30:35.923629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:56.809 [2024-07-10 13:30:35.924146] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:56.809 [2024-07-10 13:30:35.924287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:56.809 [2024-07-10 13:30:35.924440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:56.809 passed 00:06:56.809 Test: test_nvme_completion_is_retry ...passed 00:06:56.809 Test: test_get_status_string ...passed 00:06:56.809 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:56.809 Test: test_nvme_qpair_submit_request ...passed 00:06:56.809 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:56.809 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:56.809 Test: test_nvme_qpair_init_deinit ...[2024-07-10 13:30:35.925539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:56.809 passed 00:06:56.809 Test: test_nvme_get_sgl_print_info ...passed 00:06:56.809 00:06:56.809 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.809 suites 1 1 n/a 0 0 00:06:56.809 tests 12 12 12 0 0 00:06:56.809 asserts 154 154 154 0 n/a 00:06:56.809 00:06:56.809 Elapsed time = 0.002 seconds 00:06:56.809 13:30:35 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:56.809 00:06:56.809 00:06:56.809 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.809 http://cunit.sourceforge.net/ 00:06:56.809 
00:06:56.809 00:06:56.809 Suite: nvme_pcie 00:06:56.809 Test: test_prp_list_append ...[2024-07-10 13:30:35.969926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:56.809 [2024-07-10 13:30:35.970560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:56.809 [2024-07-10 13:30:35.970702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:56.809 [2024-07-10 13:30:35.971153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:56.809 [2024-07-10 13:30:35.971299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:56.809 passed 00:06:56.809 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:56.809 Test: test_shadow_doorbell_update ...passed 00:06:56.809 Test: test_build_contig_hw_sgl_request ...passed 00:06:56.809 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:56.809 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:56.809 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:56.809 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-10 13:30:35.971999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:56.809 passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-10 13:30:35.972394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
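
The "not dword aligned" and "PRP 2 not page aligned (0x900800)" errors above exercise the NVMe PRP rules: the first PRP entry may carry an offset but must be dword aligned, while every later entry must sit on a page boundary. A hedged sketch of those two checks; the helper name and the fixed 4 KiB page constant are illustrative, not SPDK's internals:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u /* illustrative; the controller reports the real page size */

/*
 * Illustrative PRP validity checks; the real logic lives in
 * nvme_pcie_prp_list_append() and handles more cases.
 */
static bool
prp_entries_valid(uint64_t first_prp, const uint64_t *later_prps, int n)
{
	if (first_prp & 0x3) {
		return false; /* "virt_addr 0x100001 not dword aligned" */
	}
	for (int i = 0; i < n; i++) {
		if (later_prps[i] & (PAGE_SIZE - 1)) {
			return false; /* "PRP 2 not page aligned (0x900800)" */
		}
	}
	return true;
}
```

0x900800 & 0xfff leaves 0x800, which is exactly why the unit test's second PRP is rejected.
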
00:06:56.809 passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-10 13:30:35.972588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:56.809 passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-10 13:30:35.972735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:56.809 passed 00:06:56.809 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-10 13:30:35.972891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:56.809 passed 00:06:56.809 00:06:56.809 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.809 suites 1 1 n/a 0 0 00:06:56.809 tests 14 14 14 0 0 00:06:56.809 asserts 235 235 235 0 n/a 00:06:56.809 00:06:56.809 Elapsed time = 0.002 seconds 00:06:56.809 13:30:35 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:56.809 00:06:56.809 00:06:56.809 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.809 http://cunit.sourceforge.net/ 00:06:56.809 00:06:56.809 00:06:56.809 Suite: nvme_ns_cmd 00:06:56.810 Test: nvme_poll_group_create_test ...passed 00:06:56.810 Test: nvme_poll_group_add_remove_test ...passed 00:06:56.810 Test: nvme_poll_group_process_completions ...passed 00:06:56.810 Test: nvme_poll_group_destroy_test ...passed 00:06:56.810 Test: nvme_poll_group_get_free_stats ...passed 00:06:56.810 00:06:56.810 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.810 suites 1 1 n/a 0 0 00:06:56.810 tests 5 5 5 0 0 00:06:56.810 asserts 75 75 75 0 n/a 00:06:56.810 00:06:56.810 Elapsed time = 0.001 seconds 00:06:56.810 13:30:36 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:56.810 00:06:56.810 00:06:56.810 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.810 http://cunit.sourceforge.net/ 00:06:56.810 00:06:56.810 00:06:56.810 Suite: nvme_quirks 00:06:56.810 Test: test_nvme_quirks_striping ...passed 00:06:56.810 00:06:56.810 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.810 suites 1 1 n/a 0 0 00:06:56.810 tests 1 1 1 0 0 00:06:56.810 asserts 5 5 5 0 n/a 00:06:56.810 00:06:56.810 Elapsed time = 0.000 seconds 00:06:56.810 13:30:36 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:56.810 00:06:56.810 00:06:56.810 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.810 http://cunit.sourceforge.net/ 00:06:56.810 00:06:56.810 00:06:56.810 Suite: nvme_tcp 00:06:56.810 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:56.810 Test: test_nvme_tcp_build_iovs ...passed 00:06:56.810 Test: test_nvme_tcp_build_sgl_request ...[2024-07-10 13:30:36.101625] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd00dbfb40, and the iovcnt=16, remaining_size=28672 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:56.810 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:56.810 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:56.810 Test: test_nvme_tcp_req_get ...passed 00:06:56.810 Test: test_nvme_tcp_req_init ...passed 00:06:56.810 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:56.810 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:56.810 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-07-10 13:30:36.103212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc1860 is same with the state(6) to be set 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:56.810 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-10 13:30:36.103662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc09f0 is same with the state(5) to be set 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-10 13:30:36.103808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd00dc1520 00:06:56.810 [2024-07-10 13:30:36.103903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:56.810 [2024-07-10 13:30:36.104015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:56.810 [2024-07-10 13:30:36.104224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:56.810 [2024-07-10 13:30:36.104352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104569] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.104705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0eb0 is same with the state(5) to be set 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-10 13:30:36.104944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:56.810 [2024-07-10 13:30:36.105021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:56.810 [2024-07-10 13:30:36.105271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:56.810 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-10 13:30:36.105556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd00dc1060): PDU Sequence Error 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_icresp_handle ...[2024-07-10 13:30:36.105761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:56.810 [2024-07-10 13:30:36.105819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:56.810 [2024-07-10 13:30:36.105885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0a00 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.105947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:56.810 [2024-07-10 13:30:36.106012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0a00 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.106086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dc0a00 is same with the state(0) to be set 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-10 13:30:36.106226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd00dc1520): PDU Sequence Error 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-10 13:30:36.106378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd00dbfce0 00:06:56.810 passed 00:06:56.810 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:56.810 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-10 13:30:36.106691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd00dbf360, errno=0, rc=0 00:06:56.810 [2024-07-10 13:30:36.106781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dbf360 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.106871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd00dbf360 is same with the state(5) to be set 00:06:56.810 [2024-07-10 13:30:36.106940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd00dbf360 (0): Success 00:06:56.810 [2024-07-10 13:30:36.107009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd00dbf360 (0): Success 00:06:56.810 passed 00:06:57.070 Test: test_nvme_tcp_ctrlr_create_io_qpair ...passed 00:06:57.070 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-07-10 13:30:36.179158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
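
The ICResp failures above map directly onto the NVMe/TCP initialization rules: the PDU format version (PFV) must be 0, MAXH2CDATA must be at least 4096 bytes, and CPDA may not exceed 31. A sketch of that validation, using a simplified stand-in for the real PDU structure:

```c
#include <stdint.h>

/* Simplified stand-in for the ICResp PDU fields checked above. */
struct ic_resp {
	uint16_t pfv;        /* PDU format version; must be 0        */
	uint32_t maxh2cdata; /* max host-to-controller data; >= 4096 */
	uint8_t  cpda;       /* controller PDU data alignment; <= 31 */
};

static int
icresp_validate(const struct ic_resp *r)
{
	if (r->pfv != 0) {
		return -1; /* "Expected ICResp PFV 0, got 1" */
	}
	if (r->maxh2cdata < 4096) {
		return -1; /* "Expected ICResp maxh2cdata >=4096, got 2048" */
	}
	if (r->cpda > 31) {
		return -1; /* "Expected ICResp cpda <=31, got 64" */
	}
	return 0;
}
```
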
00:06:57.070 [2024-07-10 13:30:36.179327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:57.070 passed 00:06:57.070 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-10 13:30:36.179542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:57.070 [2024-07-10 13:30:36.179583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:57.070 passed 00:06:57.070 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-10 13:30:36.179761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:57.070 [2024-07-10 13:30:36.179803] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:57.070 [2024-07-10 13:30:36.179883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:57.070 [2024-07-10 13:30:36.179931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:57.070 [2024-07-10 13:30:36.180011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:57.070 [2024-07-10 13:30:36.180068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:57.070 passed 00:06:57.070 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-10 13:30:36.180218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:57.070 [2024-07-10 13:30:36.180262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:57.070 passed 00:06:57.070 00:06:57.070 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.070 suites 1 1 n/a 0 0 00:06:57.070 tests 27 27 27 0 0 00:06:57.070 asserts 624 624 624 0 n/a 00:06:57.070 00:06:57.070 Elapsed time = 0.077 seconds 00:06:57.070 13:30:36 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:57.070 00:06:57.070 00:06:57.070 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.070 http://cunit.sourceforge.net/ 00:06:57.070 00:06:57.070 00:06:57.070 Suite: nvme_transport 00:06:57.070 Test: test_nvme_get_transport ...passed 00:06:57.070 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:57.070 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:57.070 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:57.070 Test: test_ctrlr_get_memory_domains ...passed 00:06:57.070 00:06:57.070 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.070 suites 1 1 n/a 0 0 00:06:57.070 tests 5 5 5 0 0 00:06:57.070 asserts 28 28 28 0 n/a 00:06:57.070 00:06:57.070 Elapsed time = 0.000 seconds 00:06:57.070 13:30:36 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:57.070 00:06:57.070 00:06:57.070 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.070 http://cunit.sourceforge.net/ 
00:06:57.070 00:06:57.070 00:06:57.070 Suite: nvme_io_msg 00:06:57.070 Test: test_nvme_io_msg_send ...passed 00:06:57.070 Test: test_nvme_io_msg_process ...passed 00:06:57.070 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:57.070 00:06:57.070 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.070 suites 1 1 n/a 0 0 00:06:57.070 tests 3 3 3 0 0 00:06:57.070 asserts 56 56 56 0 n/a 00:06:57.070 00:06:57.070 Elapsed time = 0.000 seconds 00:06:57.070 13:30:36 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:57.070 00:06:57.070 00:06:57.070 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.070 http://cunit.sourceforge.net/ 00:06:57.070 00:06:57.070 00:06:57.070 Suite: nvme_pcie_common 00:06:57.070 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-10 13:30:36.304359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:57.070 passed 00:06:57.070 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:57.070 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:57.070 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-10 13:30:36.305712] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:57.070 [2024-07-10 13:30:36.305868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:57.070 [2024-07-10 13:30:36.305930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:57.070 passed 00:06:57.070 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:57.070 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-10 13:30:36.306430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:57.070 [2024-07-10 13:30:36.306509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:57.070 passed 00:06:57.070 00:06:57.070 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.070 suites 1 1 n/a 0 0 00:06:57.070 tests 6 6 6 0 0 00:06:57.070 asserts 148 148 148 0 n/a 00:06:57.070 00:06:57.070 Elapsed time = 0.002 seconds 00:06:57.071 13:30:36 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:57.071 00:06:57.071 00:06:57.071 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.071 http://cunit.sourceforge.net/ 00:06:57.071 00:06:57.071 00:06:57.071 Suite: nvme_fabric 00:06:57.071 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:57.071 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:57.071 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:57.071 Test: test_nvme_fabric_discover_probe ...passed 00:06:57.071 Test: test_nvme_fabric_qpair_connect ...[2024-07-10 13:30:36.351445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:57.071 passed 00:06:57.071 00:06:57.071 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.071 suites 1 1 n/a 0 0 00:06:57.071 tests 5 5 5 0 0 00:06:57.071 asserts 
60 60 60 0 n/a 00:06:57.071 00:06:57.071 Elapsed time = 0.001 seconds 00:06:57.071 13:30:36 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:57.071 00:06:57.071 00:06:57.071 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.071 http://cunit.sourceforge.net/ 00:06:57.071 00:06:57.071 00:06:57.071 Suite: nvme_opal 00:06:57.071 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:57.071 Test: test_opal_add_short_atom_header ...[2024-07-10 13:30:36.395561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:57.071 passed 00:06:57.071 00:06:57.071 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.071 suites 1 1 n/a 0 0 00:06:57.071 tests 2 2 2 0 0 00:06:57.071 asserts 22 22 22 0 n/a 00:06:57.071 00:06:57.071 Elapsed time = 0.001 seconds 00:06:57.071 00:06:57.071 real 0m1.288s 00:06:57.071 user 0m0.649s 00:06:57.071 sys 0m0.472s 00:06:57.071 13:30:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.071 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.071 ************************************ 00:06:57.071 END TEST unittest_nvme 00:06:57.071 ************************************ 00:06:57.330 13:30:36 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:57.330 13:30:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:57.330 13:30:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.330 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.330 ************************************ 00:06:57.330 START TEST unittest_log 00:06:57.330 ************************************ 00:06:57.330 13:30:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:57.330 00:06:57.330 00:06:57.330 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.330 http://cunit.sourceforge.net/ 00:06:57.330 00:06:57.330 00:06:57.330 Suite: log 00:06:57.330 Test: log_test ...[2024-07-10 13:30:36.503596] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:57.330 [2024-07-10 13:30:36.503995] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:57.330 log dump test: 00:06:57.330 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:57.330 spdk dump test: 00:06:57.330 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:57.330 spdk dump test: 00:06:57.330 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:57.330 00000010 65 20 63 68 61 72 73 e chars 00:06:57.330 passed 00:06:58.342 Test: deprecation ...passed 00:06:58.342 00:06:58.342 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.342 suites 1 1 n/a 0 0 00:06:58.342 tests 2 2 2 0 0 00:06:58.342 asserts 73 73 73 0 n/a 00:06:58.342 00:06:58.342 Elapsed time = 0.001 seconds 00:06:58.342 00:06:58.342 real 0m1.044s 00:06:58.342 user 0m0.024s 00:06:58.342 sys 0m0.020s 00:06:58.342 13:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.342 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.342 ************************************ 00:06:58.342 END TEST unittest_log 00:06:58.342 ************************************ 00:06:58.342 13:30:37 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:58.342 13:30:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.342 13:30:37 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:06:58.342 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.342 ************************************ 00:06:58.342 START TEST unittest_lvol 00:06:58.342 ************************************ 00:06:58.342 13:30:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:58.342 00:06:58.342 00:06:58.342 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.342 http://cunit.sourceforge.net/ 00:06:58.342 00:06:58.342 00:06:58.342 Suite: lvol 00:06:58.342 Test: lvs_init_unload_success ...[2024-07-10 13:30:37.621300] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:58.342 passed 00:06:58.342 Test: lvs_init_destroy_success ...[2024-07-10 13:30:37.622358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:58.342 passed 00:06:58.342 Test: lvs_init_opts_success ...passed 00:06:58.342 Test: lvs_unload_lvs_is_null_fail ...[2024-07-10 13:30:37.622883] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:58.342 passed 00:06:58.342 Test: lvs_names ...[2024-07-10 13:30:37.623129] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:58.342 [2024-07-10 13:30:37.623296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:58.342 [2024-07-10 13:30:37.623593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:58.342 passed 00:06:58.342 Test: lvol_create_destroy_success ...passed 00:06:58.342 Test: lvol_create_fail ...[2024-07-10 13:30:37.624435] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:58.342 [2024-07-10 13:30:37.624671] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:58.342 passed 00:06:58.342 Test: lvol_destroy_fail ...[2024-07-10 13:30:37.625201] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:58.342 passed 00:06:58.342 Test: lvol_close ...[2024-07-10 13:30:37.625583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:58.342 [2024-07-10 13:30:37.625817] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:58.342 passed 00:06:58.342 Test: lvol_resize ...passed 00:06:58.342 Test: lvol_set_read_only ...passed 00:06:58.342 Test: test_lvs_load ...[2024-07-10 13:30:37.626950] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:58.342 [2024-07-10 13:30:37.627119] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:58.342 passed 00:06:58.342 Test: lvols_load ...[2024-07-10 13:30:37.627532] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:58.342 [2024-07-10 13:30:37.627772] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:58.342 passed 00:06:58.342 Test: lvol_open ...passed 00:06:58.342 Test: lvol_snapshot ...passed 00:06:58.342 Test: lvol_snapshot_fail ...[2024-07-10 13:30:37.628861] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:58.342 passed 00:06:58.342 Test: lvol_clone ...passed 00:06:58.342 Test: lvol_clone_fail ...[2024-07-10 13:30:37.629712] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:58.342 passed 00:06:58.342 Test: lvol_iter_clones ...passed 00:06:58.343 Test: lvol_refcnt ...[2024-07-10 13:30:37.630500] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 5b7722b7-6500-42e4-b7a9-f16612ae5d2b because it is still open 00:06:58.343 passed 00:06:58.343 Test: lvol_names ...[2024-07-10 13:30:37.630769] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:58.343 [2024-07-10 13:30:37.630914] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:58.343 [2024-07-10 13:30:37.631172] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:58.343 passed 00:06:58.343 Test: lvol_create_thin_provisioned ...passed 00:06:58.343 Test: lvol_rename ...[2024-07-10 13:30:37.631653] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:58.343 [2024-07-10 13:30:37.631802] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:58.343 passed 00:06:58.343 Test: lvs_rename ...[2024-07-10 13:30:37.632103] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:58.343 passed 00:06:58.343 Test: lvol_inflate ...[2024-07-10 13:30:37.632384] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:58.343 passed 00:06:58.343 Test: lvol_decouple_parent ...[2024-07-10 13:30:37.632684] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:58.343 passed 00:06:58.343 Test: lvol_get_xattr ...passed 00:06:58.343 Test: lvol_esnap_reload ...passed 00:06:58.343 Test: lvol_esnap_create_bad_args ...[2024-07-10 13:30:37.633244] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:58.343 [2024-07-10 13:30:37.633354] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
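
The name errors in this suite ("No name specified.", "Name has no null terminator.", "lvol with name ... already exists") are all guard checks on user-supplied names. A hedged sketch of that style of validation; the helper, the length cap, and the exists() callback are illustrative, not lvol.c's actual code:

```c
#include <stddef.h>
#include <string.h>

#define NAME_MAX_LEN 64 /* illustrative cap, not lvol.c's exact constant */

/*
 * Hypothetical name guard: NUL-terminated inside the buffer, non-empty,
 * within the cap, and not already taken (exists() is a stand-in lookup).
 */
static int
name_check(const char *name, size_t buflen, int (*exists)(const char *))
{
	if (memchr(name, '\0', buflen) == NULL) {
		return -1; /* "Name has no null terminator." */
	}
	if (name[0] == '\0') {
		return -1; /* "No name specified." */
	}
	if (strlen(name) >= NAME_MAX_LEN) {
		return -1;
	}
	if (exists(name)) {
		return -1; /* "lvol with name ... already exists" */
	}
	return 0;
}
```
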
00:06:58.343 [2024-07-10 13:30:37.633460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:58.343 [2024-07-10 13:30:37.633629] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:58.343 [2024-07-10 13:30:37.633811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:58.343 passed 00:06:58.343 Test: lvol_esnap_create_delete ...passed 00:06:58.343 Test: lvol_esnap_load_esnaps ...[2024-07-10 13:30:37.634180] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:58.343 passed 00:06:58.343 Test: lvol_esnap_missing ...[2024-07-10 13:30:37.634414] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:58.343 [2024-07-10 13:30:37.634528] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:58.343 passed 00:06:58.343 Test: lvol_esnap_hotplug ... 00:06:58.343 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:58.343 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:58.343 [2024-07-10 13:30:37.635160] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol ee3c552b-ab23-4072-8a42-fbbc17f8c704: failed to create esnap bs_dev: error -12 00:06:58.343 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:58.343 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:58.343 [2024-07-10 13:30:37.635449] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5af32af7-0d84-4bc6-b8fe-36708963811b: failed to create esnap bs_dev: error -12 00:06:58.343 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:58.343 [2024-07-10 13:30:37.635660] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a68ca171-36fb-4f88-9805-808e6949d282: failed to create esnap bs_dev: error -12 00:06:58.343 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:58.343 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:58.343 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:58.343 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:58.343 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:58.343 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:58.343 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:58.343 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:58.343 passed 00:06:58.343 Test: lvol_get_by ...passed 00:06:58.343 00:06:58.343 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.343 suites 1 1 n/a 0 0 00:06:58.343 tests 34 34 34 0 0 00:06:58.343 asserts 1439 1439 1439 0 n/a 00:06:58.343 00:06:58.343 Elapsed time = 0.010 seconds 00:06:58.343 00:06:58.343 real 0m0.064s 00:06:58.343 user 0m0.034s 00:06:58.343 sys 0m0.025s 00:06:58.343 13:30:37 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.343 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.343 ************************************ 00:06:58.343 END TEST unittest_lvol 00:06:58.343 ************************************ 00:06:58.603 13:30:37 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:58.603 13:30:37 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:58.603 13:30:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.603 13:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.603 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.603 ************************************ 00:06:58.603 START TEST unittest_nvme_rdma 00:06:58.603 ************************************ 00:06:58.603 13:30:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:58.603 00:06:58.603 00:06:58.603 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.603 http://cunit.sourceforge.net/ 00:06:58.603 00:06:58.603 00:06:58.603 Suite: nvme_rdma 00:06:58.603 Test: test_nvme_rdma_build_sgl_request ...[2024-07-10 13:30:37.747501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:58.603 [2024-07-10 13:30:37.747993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:58.603 [2024-07-10 13:30:37.748203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:58.603 passed 00:06:58.603 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:58.603 Test: test_nvme_rdma_build_contig_request ...[2024-07-10 13:30:37.748518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:58.603 passed 00:06:58.603 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:58.603 Test: test_nvme_rdma_create_reqs ...[2024-07-10 13:30:37.748861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:58.603 passed 00:06:58.603 Test: test_nvme_rdma_create_rsps ...[2024-07-10 13:30:37.749406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:58.603 passed 00:06:58.604 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-10 13:30:37.749805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:58.604 [2024-07-10 13:30:37.749935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
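
The "SGL length 16777216 exceeds max keyed SGL block size 16777215" failures above stem from the keyed SGL data block descriptor, whose length field is 24 bits wide, so a single keyed block can address at most 2^24 - 1 bytes; 16777216 is exactly one byte over. A minimal sketch of that bound (the constant name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Keyed SGL data block descriptors carry a 24-bit length field. */
#define MAX_KEYED_SGL_LEN ((1u << 24) - 1) /* 16777215 bytes */

static bool
keyed_sgl_len_ok(uint64_t len)
{
	return len <= MAX_KEYED_SGL_LEN; /* 16777216 fails by one byte */
}
```
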
00:06:58.604 passed 00:06:58.604 Test: test_nvme_rdma_poller_create ...passed 00:06:58.604 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-10 13:30:37.750336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:58.604 passed 00:06:58.604 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:58.604 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:58.604 Test: test_nvme_rdma_req_init ...passed 00:06:58.604 Test: test_nvme_rdma_validate_cm_event ...[2024-07-10 13:30:37.751105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:58.604 [2024-07-10 13:30:37.751217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:58.604 passed 00:06:58.604 Test: test_nvme_rdma_qpair_init ...passed 00:06:58.604 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:58.604 Test: test_nvme_rdma_memory_domain ...[2024-07-10 13:30:37.751743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:58.604 passed 00:06:58.604 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:58.604 Test: test_rdma_get_memory_translation ...[2024-07-10 13:30:37.752098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:58.604 [2024-07-10 13:30:37.752188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:58.604 passed 00:06:58.604 Test: test_get_rdma_qpair_from_wc ...passed 00:06:58.604 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:58.604 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-10 13:30:37.752436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:58.604 [2024-07-10 13:30:37.752509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:58.604 passed 00:06:58.604 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-10 13:30:37.752709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:58.604 [2024-07-10 13:30:37.752787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:58.604 [2024-07-10 13:30:37.752847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffedd64e6d0 on poll group 0x60b0000001a0 00:06:58.604 [2024-07-10 13:30:37.752916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:58.604 [2024-07-10 13:30:37.752985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:58.604 [2024-07-10 13:30:37.753043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffedd64e6d0 on poll group 0x60b0000001a0 00:06:58.604 [2024-07-10 13:30:37.753147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:58.604 passed 00:06:58.604 00:06:58.604 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.604 suites 1 1 n/a 0 0 00:06:58.604 tests 22 22 22 0 0 00:06:58.604 asserts 412 412 412 0 n/a 00:06:58.604 00:06:58.604 Elapsed time = 0.004 seconds 00:06:58.604 00:06:58.604 real 0m0.052s 00:06:58.604 user 0m0.032s 00:06:58.604 sys 0m0.019s 00:06:58.604 13:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.604 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.604 ************************************ 00:06:58.604 END TEST unittest_nvme_rdma 00:06:58.604 ************************************ 00:06:58.604 13:30:37 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:58.604 13:30:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.604 13:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.604 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.604 ************************************ 00:06:58.604 START TEST unittest_nvmf_transport 00:06:58.604 ************************************ 00:06:58.604 13:30:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:58.604 00:06:58.604 00:06:58.604 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.604 http://cunit.sourceforge.net/ 00:06:58.604 00:06:58.604 00:06:58.604 Suite: nvmf 00:06:58.604 Test: test_spdk_nvmf_transport_create ...[2024-07-10 13:30:37.865197] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:58.604 [2024-07-10 13:30:37.865561] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:58.604 [2024-07-10 13:30:37.865652] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:58.604 [2024-07-10 13:30:37.865797] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:58.604 passed 00:06:58.604 Test: test_nvmf_transport_poll_group_create ...passed 00:06:58.604 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-10 13:30:37.866173] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
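
The transport "opts" checks exercised in this suite (a NULL opts pointer and a zero opts_size are both rejected) follow SPDK's option-struct convention: the caller passes sizeof its options struct so the library can grow the struct without breaking older binaries. A hedged sketch of that pattern with illustrative types and defaults:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative options struct following the opts_size convention. */
struct xport_opts {
	size_t   opts_size;    /* caller sets this to sizeof(*opts) */
	uint32_t io_unit_size;
	uint32_t max_io_size;
};

static int
xport_opts_init(struct xport_opts *opts, size_t opts_size)
{
	if (opts == NULL) {
		return -1; /* "opts should not be NULL" */
	}
	if (opts_size == 0) {
		return -1; /* "opts_size inside opts should not be zero value" */
	}
	memset(opts, 0, opts_size);
	opts->opts_size = opts_size;
	/* Only set fields that fit inside the caller's (possibly older) struct. */
	if (offsetof(struct xport_opts, io_unit_size) +
	    sizeof(opts->io_unit_size) <= opts_size) {
		opts->io_unit_size = 8192; /* illustrative default */
	}
	return 0;
}
```
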
00:06:58.604 [2024-07-10 13:30:37.866291] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:06:58.604 [2024-07-10 13:30:37.866347] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:06:58.604 passed
00:06:58.604 Test: test_spdk_nvmf_transport_listen_ext ...passed
00:06:58.604
00:06:58.604 Run Summary: Type Total Ran Passed Failed Inactive
00:06:58.604 suites 1 1 n/a 0 0
00:06:58.604 tests 4 4 4 0 0
00:06:58.604 asserts 49 49 49 0 n/a
00:06:58.604
00:06:58.604 Elapsed time = 0.001 seconds
00:06:58.604
00:06:58.604 real 0m0.054s
00:06:58.604 user 0m0.037s
00:06:58.604 sys 0m0.017s
00:06:58.604 13:30:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:58.604 13:30:37 -- common/autotest_common.sh@10 -- # set +x
00:06:58.604 ************************************
00:06:58.604 END TEST unittest_nvmf_transport
00:06:58.604 ************************************
00:06:58.604 13:30:37 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:06:58.604 13:30:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:58.604 13:30:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:58.604 13:30:37 -- common/autotest_common.sh@10 -- # set +x
00:06:58.604 ************************************
00:06:58.604 START TEST unittest_rdma
00:06:58.604 ************************************
00:06:58.604 13:30:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:06:58.864
00:06:58.864
00:06:58.864 CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.864 http://cunit.sourceforge.net/
00:06:58.864
00:06:58.864
00:06:58.864 Suite: rdma_common
00:06:58.864 Test: test_spdk_rdma_pd ...[2024-07-10 13:30:37.979102] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:06:58.864 [2024-07-10 13:30:37.979785] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:06:58.864 passed
00:06:58.864
00:06:58.864 Run Summary: Type Total Ran Passed Failed Inactive
00:06:58.864 suites 1 1 n/a 0 0
00:06:58.864 tests 1 1 1 0 0
00:06:58.864 asserts 31 31 31 0 n/a
00:06:58.864
00:06:58.864 Elapsed time = 0.001 seconds
00:06:58.864
00:06:58.864 real 0m0.046s
00:06:58.864 user 0m0.025s
00:06:58.864 sys 0m0.022s
00:06:58.864 13:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:58.864 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 ************************************
00:06:58.864 END TEST unittest_rdma
00:06:58.864 ************************************
00:06:58.864 13:30:38 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:58.864 13:30:38 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:06:58.864 13:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:58.864 13:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:58.864 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 ************************************
00:06:58.864 START TEST unittest_nvme_cuse
00:06:58.864 ************************************
00:06:58.864 13:30:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:06:58.864
00:06:58.864
00:06:58.864 CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.864 http://cunit.sourceforge.net/
00:06:58.864
00:06:58.864
00:06:58.864 Suite: nvme_cuse
00:06:58.864 Test: test_cuse_nvme_submit_io_read_write ...passed
00:06:58.864 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:06:58.864 Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:06:58.864 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:06:58.864 Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:06:58.864 Test: test_cuse_nvme_submit_io ...[2024-07-10 13:30:38.093003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:06:58.864 passed
00:06:58.864 Test: test_cuse_nvme_reset ...[2024-07-10 13:30:38.093548] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:06:58.864 passed
00:06:58.864 Test: test_nvme_cuse_stop ...passed
00:06:58.864 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:06:58.864
00:06:58.864 Run Summary: Type Total Ran Passed Failed Inactive
00:06:58.864 suites 1 1 n/a 0 0
00:06:58.864 tests 9 9 9 0 0
00:06:58.864 asserts 121 121 121 0 n/a
00:06:58.864
00:06:58.864 Elapsed time = 0.002 seconds
00:06:58.864
00:06:58.864 real 0m0.047s
00:06:58.864 user 0m0.031s
00:06:58.864 sys 0m0.015s
00:06:58.864 13:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:58.864 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 ************************************
00:06:58.864 END TEST unittest_nvme_cuse
00:06:58.864 ************************************
00:06:58.865 13:30:38 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf
00:06:58.865 13:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:58.865 13:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:58.865 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:58.865 ************************************
00:06:58.865 START TEST unittest_nvmf
00:06:58.865 ************************************
00:06:58.865 13:30:38 -- common/autotest_common.sh@1104 -- # unittest_nvmf
00:06:58.865 13:30:38 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:06:58.865
00:06:58.865
00:06:58.865 CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.865 http://cunit.sourceforge.net/
00:06:58.865
00:06:58.865
00:06:58.865 Suite: nvmf
00:06:58.865 Test: test_get_log_page ...[2024-07-10 13:30:38.206583] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:06:58.865 passed
00:06:58.865 Test: test_process_fabrics_cmd ...passed
00:06:58.865 Test: test_connect ...[2024-07-10 13:30:38.207809] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:06:58.865 [2024-07-10 13:30:38.207944] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:06:58.865 [2024-07-10 13:30:38.208023] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:06:58.865 [2024-07-10 13:30:38.208105] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:06:58.865 [2024-07-10 13:30:38.208227] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:06:58.865 [2024-07-10 13:30:38.208298] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:06:58.865 [2024-07-10 13:30:38.208419] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:06:58.865 [2024-07-10 13:30:38.208481] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:06:58.865 [2024-07-10 13:30:38.208619] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:06:58.865 [2024-07-10 13:30:38.208724] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:06:58.865 [2024-07-10 13:30:38.209038] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:06:58.865 [2024-07-10 13:30:38.209143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:06:58.865 [2024-07-10 13:30:38.209267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:06:58.865 [2024-07-10 13:30:38.209373] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:06:58.865 [2024-07-10 13:30:38.209506] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1
00:06:58.865 [2024-07-10 13:30:38.209674] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil))
00:06:58.865 passed
00:06:58.865 Test: test_get_ns_id_desc_list ...passed
00:06:58.865 Test: test_identify_ns ...[2024-07-10 13:30:38.210045] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:58.865 [2024-07-10 13:30:38.210292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:06:58.865 [2024-07-10 13:30:38.210462] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:06:58.865 passed
00:06:58.865 Test: test_identify_ns_iocs_specific ...[2024-07-10 13:30:38.210682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:58.865 [2024-07-10 13:30:38.211016] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:06:58.865 passed
00:06:58.865 Test: test_reservation_write_exclusive ...passed
00:06:58.865 Test: test_reservation_exclusive_access ...passed
00:06:58.865 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:06:58.865 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:06:58.865 Test: test_reservation_notification_log_page ...passed
00:06:58.865 Test: test_get_dif_ctx ...passed
00:06:58.865 Test: test_set_get_features ...[2024-07-10 13:30:38.211922] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:06:58.865 [2024-07-10 13:30:38.211999] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:06:58.865 [2024-07-10 13:30:38.212067] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:06:58.865 [2024-07-10 13:30:38.212164] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:06:58.865 passed
00:06:58.865 Test: test_identify_ctrlr ...passed
00:06:58.865 Test: test_identify_ctrlr_iocs_specific ...passed
00:06:58.865 Test: test_custom_admin_cmd ...passed
00:06:58.865 Test: test_fused_compare_and_write ...[2024-07-10 13:30:38.212939] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:06:58.865 [2024-07-10 13:30:38.213020] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:06:58.865 [2024-07-10 13:30:38.213095] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:06:58.865 passed
00:06:58.865 Test: test_multi_async_event_reqs ...passed
00:06:58.865 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:06:58.865 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:06:58.865 Test: test_multi_async_events ...passed
00:06:58.865 Test: test_rae ...passed
00:06:58.865 Test: test_nvmf_ctrlr_create_destruct ...passed
00:06:58.865 Test: test_nvmf_ctrlr_use_zcopy ...passed
00:06:58.865 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-10 13:30:38.214052] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT
00:06:58.865 passed
00:06:58.865 Test: test_zcopy_read ...passed
00:06:58.865 Test: test_zcopy_write ...passed
00:06:58.865 Test: test_nvmf_property_set ...passed
00:06:58.865 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-10 13:30:38.214457] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:06:58.865 [2024-07-10 13:30:38.214572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:06:58.865 passed
00:06:58.865 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-10 13:30:38.214688] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:06:58.865 [2024-07-10 13:30:38.214774] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:06:58.865 [2024-07-10 13:30:38.214840] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:06:58.865 passed
00:06:58.865
00:06:58.865 Run Summary: Type Total Ran Passed Failed Inactive
00:06:58.865 suites 1 1 n/a 0 0
00:06:58.865 tests 30 30 30 0 0
00:06:58.865 asserts 885 885 885 0 n/a
00:06:58.865
00:06:58.865 Elapsed time = 0.006 seconds
00:06:59.125 13:30:38 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:06:59.125
00:06:59.125
00:06:59.125 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.125 http://cunit.sourceforge.net/
00:06:59.125
00:06:59.125
00:06:59.125 Suite: nvmf
00:06:59.125 Test: test_get_rw_params ...passed
00:06:59.125 Test: test_lba_in_range ...passed
00:06:59.125 Test: test_get_dif_ctx ...passed
00:06:59.125 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:06:59.125 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-10 13:30:38.262057] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:06:59.125 [2024-07-10 13:30:38.262470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:06:59.125 [2024-07-10 13:30:38.262638] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:06:59.125 passed
00:06:59.125 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-10 13:30:38.262830] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:06:59.125 [2024-07-10 13:30:38.262975] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:06:59.125 passed
00:06:59.125 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-10 13:30:38.263254] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:06:59.125 [2024-07-10 13:30:38.263337] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:06:59.125 [2024-07-10 13:30:38.263461] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:06:59.125 [2024-07-10 13:30:38.263540] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:06:59.125 passed
00:06:59.125 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:06:59.125 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:06:59.125
00:06:59.125 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.125 suites 1 1 n/a 0 0
00:06:59.125 tests 9 9 9 0 0
00:06:59.125 asserts 157 157 157 0 n/a
00:06:59.125
00:06:59.125 Elapsed time = 0.001 seconds
00:06:59.125 13:30:38 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:06:59.125
00:06:59.125
00:06:59.125 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.125 http://cunit.sourceforge.net/
00:06:59.125
00:06:59.125
00:06:59.125 Suite: nvmf
00:06:59.125 Test: test_discovery_log ...passed
00:06:59.125 Test: test_discovery_log_with_filters ...passed
00:06:59.125
00:06:59.125 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.125 suites 1 1 n/a 0 0
00:06:59.125 tests 2 2 2 0 0
00:06:59.125 asserts 238 238 238 0 n/a
00:06:59.125
00:06:59.125 Elapsed time = 0.002 seconds
00:06:59.125 13:30:38 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:06:59.125
00:06:59.125
00:06:59.125 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.125 http://cunit.sourceforge.net/
00:06:59.125
00:06:59.125
00:06:59.125 Suite: nvmf
00:06:59.125 Test: nvmf_test_create_subsystem ...[2024-07-10 13:30:38.356671] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
[2024-07-10 13:30:38.357177] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
[2024-07-10 13:30:38.357346] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
[2024-07-10 13:30:38.357441] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
[2024-07-10 13:30:38.357527] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
[2024-07-10 13:30:38.357617] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
[2024-07-10 13:30:38.357819] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
[2024-07-10 13:30:38.358091] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:06:59.126 [2024-07-10 13:30:38.358281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
[2024-07-10 13:30:38.358375] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
[2024-07-10 13:30:38.358449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
passed
00:06:59.126 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-10 13:30:38.358798] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
[2024-07-10 13:30:38.358922] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
passed
00:06:59.126 Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:06:59.126 Test: test_reservation_register ...[2024-07-10 13:30:38.359281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
[2024-07-10 13:30:38.359432] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant
passed
00:06:59.126 Test: test_reservation_register_with_ptpl ...passed
00:06:59.126 Test: test_reservation_acquire_preempt_1 ...[2024-07-10 13:30:38.360508] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_acquire_release_with_ptpl ...passed
00:06:59.126 Test: test_reservation_release ...[2024-07-10 13:30:38.362196] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_unregister_notification ...[2024-07-10 13:30:38.362510] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_release_notification ...[2024-07-10 13:30:38.362833] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_release_notification_write_exclusive ...[2024-07-10 13:30:38.363124] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_clear_notification ...[2024-07-10 13:30:38.363425] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_reservation_preempt_notification ...[2024-07-10 13:30:38.363686] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
passed
00:06:59.126 Test: test_spdk_nvmf_ns_event ...passed
00:06:59.126 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:06:59.126 Test: test_nvmf_subsystem_add_ctrlr ...passed
00:06:59.126 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-10 13:30:38.364667] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
[2024-07-10 13:30:38.364789] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport
passed
00:06:59.126 Test: test_nvmf_ns_reservation_report ...[2024-07-10 13:30:38.364990] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
passed
00:06:59.126 Test: test_nvmf_nqn_is_valid ...[2024-07-10 13:30:38.365139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
[2024-07-10 13:30:38.365214] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:99da80b6-bb7d-4dda-a1a4-7b9aef88f09": uuid is not the correct length
[2024-07-10 13:30:38.365271] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
passed
00:06:59.126 Test: test_nvmf_ns_reservation_restore ...[2024-07-10 13:30:38.365444] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
passed
00:06:59.126 Test: test_nvmf_subsystem_state_change ...passed
00:06:59.126 Test: test_nvmf_reservation_custom_ops ...passed
00:06:59.126
00:06:59.126 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.126 suites 1 1 n/a 0 0
00:06:59.126 tests 22 22 22 0 0
00:06:59.126 asserts 407 407 407 0 n/a
00:06:59.126
00:06:59.126 Elapsed time = 0.008 seconds
00:06:59.126 13:30:38 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:06:59.126
00:06:59.126
00:06:59.126 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.126 http://cunit.sourceforge.net/
00:06:59.126
00:06:59.126
00:06:59.126 Suite: nvmf
00:06:59.126 Test: test_nvmf_tcp_create ...[2024-07-10 13:30:38.429131] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
passed
00:06:59.126 Test: test_nvmf_tcp_destroy ...passed
00:06:59.386 Test: test_nvmf_tcp_poll_group_create ...passed
00:06:59.386 Test: test_nvmf_tcp_send_c2h_data ...passed
00:06:59.386 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:06:59.386 Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:06:59.386 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed
00:06:59.386 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-10 13:30:38.499090] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.499175] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
[2024-07-10 13:30:38.499244] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
[2024-07-10 13:30:38.499294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.499324] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
passed
00:06:59.386 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:06:59.386 Test: test_nvmf_tcp_icreq_handle ...[2024-07-10 13:30:38.499484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
[2024-07-10 13:30:38.499571] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.499636] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
[2024-07-10 13:30:38.499676] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
[2024-07-10 13:30:38.499714] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
[2024-07-10 13:30:38.499751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.499790] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
[2024-07-10 13:30:38.499838] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
[2024-07-10 13:30:38.499900] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
passed
00:06:59.386 Test: test_nvmf_tcp_check_xfer_type ...passed
00:06:59.386 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-10 13:30:38.500056] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
[2024-07-10 13:30:38.500110] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500152] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e3e50 is same with the state(5) to be set
passed
00:06:59.386 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-10 13:30:38.500240] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd3d5e4bb0
[2024-07-10 13:30:38.500316] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500371] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.500427] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd3d5e4310
[2024-07-10 13:30:38.500469] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500507] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.500545] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
[2024-07-10 13:30:38.500588] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.500701] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
[2024-07-10 13:30:38.500743] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500779] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.500818] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500863] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.500931] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.500985] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.501044] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.501081] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.501124] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.501164] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.501214] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.501251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
[2024-07-10 13:30:38.501299] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
[2024-07-10 13:30:38.501349] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd3d5e4310 is same with the state(5) to be set
passed
00:06:59.387 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed
00:06:59.387 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-10 13:30:38.514970] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
[2024-07-10 13:30:38.515046] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
passed
00:06:59.387 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-10 13:30:38.515240] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
[2024-07-10 13:30:38.515289] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
passed
00:06:59.387 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-10 13:30:38.515452] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
[2024-07-10 13:30:38.515495] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
passed
00:06:59.387
00:06:59.387 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.387 suites 1 1 n/a 0 0
00:06:59.387 tests 17 17 17 0 0
00:06:59.387 asserts 222 222 222 0 n/a
00:06:59.387
00:06:59.387 Elapsed time = 0.104 seconds
00:06:59.387 13:30:38 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:06:59.387
00:06:59.387
00:06:59.387 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.387 http://cunit.sourceforge.net/
00:06:59.387
00:06:59.387
00:06:59.387 Suite: nvmf
00:06:59.387 Test: test_nvmf_tgt_create_poll_group ...passed
00:06:59.387
00:06:59.387 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.387 suites 1 1 n/a 0 0
00:06:59.387 tests 1 1 1 0 0
00:06:59.387 asserts 17 17 17 0 n/a
00:06:59.387
00:06:59.387 Elapsed time = 0.019 seconds
00:06:59.387 ************************************
00:06:59.387 END TEST unittest_nvmf
00:06:59.387 ************************************
00:06:59.387
00:06:59.387 real 0m0.486s
00:06:59.387 user 0m0.243s
00:06:59.387 sys 0m0.236s
00:06:59.387 13:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:59.387 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:59.387 13:30:38 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:59.387 13:30:38 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:59.387 13:30:38 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:06:59.387 13:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:59.387 13:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:59.387 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:59.387 ************************************
00:06:59.387 START TEST unittest_nvmf_rdma ************************************
00:06:59.387 13:30:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:06:59.648
00:06:59.648
00:06:59.648 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.648 http://cunit.sourceforge.net/
00:06:59.648
00:06:59.648
00:06:59.648 Suite: nvmf
00:06:59.648 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-10 13:30:38.755133] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
[2024-07-10 13:30:38.755501] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
[2024-07-10 13:30:38.755584] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
passed
00:06:59.648 Test: test_spdk_nvmf_rdma_request_process ...passed
00:06:59.648 Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:06:59.648 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:06:59.648 Test: test_nvmf_rdma_opts_init ...passed
00:06:59.648 Test: test_nvmf_rdma_request_free_data ...passed
00:06:59.648 Test: test_nvmf_rdma_update_ibv_state ...[2024-07-10 13:30:38.757346] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state!
[2024-07-10 13:30:38.757434] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue
passed
00:06:59.648 Test: test_nvmf_rdma_resources_create ...passed
00:06:59.648 Test: test_nvmf_rdma_qpair_compare ...passed
00:06:59.648 Test: test_nvmf_rdma_resize_cq ...[2024-07-10 13:30:38.758903] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:06:59.648 Using CQ of insufficient size may lead to CQ overrun
[2024-07-10 13:30:38.759050] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
[2024-07-10 13:30:38.759130] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
passed
00:06:59.648
00:06:59.648 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.648 suites 1 1 n/a 0 0
00:06:59.648 tests 10 10 10 0 0
00:06:59.648 asserts 584 584 584 0 n/a
00:06:59.648
00:06:59.648 Elapsed time = 0.004 seconds
00:06:59.648 ************************************
00:06:59.648 END TEST unittest_nvmf_rdma
00:06:59.648 ************************************
00:06:59.648
00:06:59.648 real 0m0.046s
00:06:59.648 user 0m0.037s
00:06:59.648 sys 0m0.009s
00:06:59.648 13:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:59.648 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:59.648 13:30:38 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:59.648 13:30:38 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi
00:06:59.648 13:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:59.648 13:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:59.648 13:30:38 -- common/autotest_common.sh@10 -- # set +x
00:06:59.648 ************************************
00:06:59.648 START TEST unittest_scsi
00:06:59.648 ************************************
00:06:59.648 13:30:38 -- common/autotest_common.sh@1104 -- # unittest_scsi
00:06:59.648 13:30:38 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:06:59.648
00:06:59.648
00:06:59.648 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.648 http://cunit.sourceforge.net/
00:06:59.648
00:06:59.648
00:06:59.648 Suite: dev_suite
00:06:59.648 Test: dev_destruct_null_dev ...passed
00:06:59.648 Test: dev_destruct_zero_luns ...passed
00:06:59.648 Test: dev_destruct_null_lun ...passed
00:06:59.648 Test: dev_destruct_success ...passed
00:06:59.648 Test: dev_construct_num_luns_zero ...[2024-07-10 13:30:38.851132] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
passed
00:06:59.648 Test: dev_construct_no_lun_zero ...[2024-07-10 13:30:38.851709] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
passed
00:06:59.649 Test: dev_construct_null_lun ...[2024-07-10 13:30:38.851889] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
passed
00:06:59.649 Test: dev_construct_name_too_long ...[2024-07-10 13:30:38.852062] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
passed
00:06:59.649 Test: dev_construct_success ...passed
00:06:59.649 Test: dev_construct_success_lun_zero_not_first ...passed
00:06:59.649 Test: dev_queue_mgmt_task_success ...passed
00:06:59.649 Test: dev_queue_task_success ...passed
00:06:59.649 Test: dev_stop_success ...passed
00:06:59.649 Test: dev_add_port_max_ports ...[2024-07-10 13:30:38.852982] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
passed
00:06:59.649 Test: dev_add_port_construct_failure1 ...[2024-07-10 13:30:38.853234] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long
passed
00:06:59.649 Test: dev_add_port_construct_failure2 ...[2024-07-10 13:30:38.853468] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
passed
00:06:59.649 Test: dev_add_port_success1 ...passed
00:06:59.649 Test: dev_add_port_success2 ...passed
00:06:59.649 Test: dev_add_port_success3 ...passed
00:06:59.649 Test: dev_find_port_by_id_num_ports_zero ...passed
00:06:59.649 Test: dev_find_port_by_id_id_not_found_failure ...passed
00:06:59.649 Test: dev_find_port_by_id_success ...passed
00:06:59.649 Test: dev_add_lun_bdev_not_found ...passed
00:06:59.649 Test: dev_add_lun_no_free_lun_id ...[2024-07-10 13:30:38.854529] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
passed
00:06:59.649 Test: dev_add_lun_success1 ...passed
00:06:59.649 Test: dev_add_lun_success2 ...passed
00:06:59.649 Test: dev_check_pending_tasks ...passed
00:06:59.649 Test: dev_iterate_luns ...passed
00:06:59.649 Test: dev_find_free_lun ...passed
00:06:59.649
00:06:59.649 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.649 suites 1 1 n/a 0 0
00:06:59.649 tests 29 29 29 0 0
00:06:59.649 asserts 97 97 97 0 n/a
00:06:59.649
00:06:59.649 Elapsed time = 0.003 seconds
00:06:59.649 13:30:38 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:06:59.649
00:06:59.649
00:06:59.649 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.649 http://cunit.sourceforge.net/
00:06:59.649
00:06:59.649
00:06:59.649 Suite: lun_suite
00:06:59.649 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-10 13:30:38.905892] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
passed
00:06:59.649 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-10 13:30:38.906309] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
passed
00:06:59.649 Test: lun_task_mgmt_execute_lun_reset ...passed
00:06:59.649 Test: lun_task_mgmt_execute_target_reset ...passed
00:06:59.649 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-10 13:30:38.906667] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
passed
00:06:59.649 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:06:59.649 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:06:59.649 Test: lun_append_task_null_lun_not_supported ...passed
00:06:59.649 Test: lun_execute_scsi_task_pending ...passed
00:06:59.649 Test: lun_execute_scsi_task_complete ...passed
00:06:59.649 Test: lun_execute_scsi_task_resize ...passed
00:06:59.649 Test: lun_destruct_success ...passed
00:06:59.649 Test: lun_construct_null_ctx ...[2024-07-10 13:30:38.907320] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
passed
00:06:59.649 Test: lun_construct_success ...passed
00:06:59.649 Test: lun_reset_task_wait_scsi_task_complete ...passed
00:06:59.649 Test: lun_reset_task_suspend_scsi_task ...passed
00:06:59.649 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:06:59.649 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:06:59.649
00:06:59.649 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.649 suites 1 1 n/a 0 0
00:06:59.649 tests 18 18 18 0 0
00:06:59.649 asserts 153 153 153 0 n/a
00:06:59.649
00:06:59.649 Elapsed time = 0.001 seconds
00:06:59.649 13:30:38 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:06:59.649
00:06:59.649
00:06:59.649 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.649 http://cunit.sourceforge.net/
00:06:59.649
00:06:59.649
00:06:59.649 Suite: scsi_suite
00:06:59.649 Test: scsi_init ...passed
00:06:59.649
00:06:59.649 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.649 suites 1 1 n/a 0 0
00:06:59.649 tests 1 1 1 0 0
00:06:59.649 asserts 1 1 1 0 n/a
00:06:59.649
00:06:59.649 Elapsed time = 0.000 seconds
00:06:59.649 13:30:38 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:06:59.649
00:06:59.649
00:06:59.649 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.649 http://cunit.sourceforge.net/
00:06:59.649
00:06:59.649
00:06:59.649 Suite: translation_suite
00:06:59.649 Test: mode_select_6_test ...passed
00:06:59.649 Test: mode_select_6_test2 ...passed
00:06:59.649 Test: mode_sense_6_test ...passed
00:06:59.649 Test: mode_sense_10_test ...passed
00:06:59.649 Test: inquiry_evpd_test ...passed
00:06:59.649 Test: inquiry_standard_test ...passed
00:06:59.649 Test: inquiry_overflow_test ...passed
00:06:59.649 Test: task_complete_test ...passed
00:06:59.649 Test: lba_range_test ...passed
00:06:59.649 Test: xfer_len_test ...[2024-07-10 13:30:39.001120] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
passed
00:06:59.649 Test: xfer_test ...passed
00:06:59.649 Test: scsi_name_padding_test ...passed
00:06:59.649 Test: get_dif_ctx_test ...passed
00:06:59.649 Test: unmap_split_test ...passed
00:06:59.649
00:06:59.649 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.649 suites 1 1 n/a 0 0
00:06:59.649 tests 14 14 14 0 0
00:06:59.649 asserts 1200 1200 1200 0 n/a
00:06:59.649
00:06:59.649 Elapsed time = 0.003 seconds
00:06:59.909 13:30:39 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:06:59.909
00:06:59.909
00:06:59.909 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.909 http://cunit.sourceforge.net/
00:06:59.909
00:06:59.909
00:06:59.909 Suite: reservation_suite
00:06:59.909 Test: test_reservation_register ...[2024-07-10 13:30:39.044010] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
passed
00:06:59.909 Test: test_reservation_reserve ...[2024-07-10 13:30:39.044787] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
[2024-07-10 13:30:39.044958] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
[2024-07-10 13:30:39.045149] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
passed
00:06:59.909 Test: test_reservation_preempt_non_all_regs ...[2024-07-10 13:30:39.045399] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
[2024-07-10 13:30:39.045565] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
passed
00:06:59.909 Test: test_reservation_preempt_all_regs ...[2024-07-10 13:30:39.045912] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
passed
00:06:59.909 Test: test_reservation_cmds_conflict ...[2024-07-10 13:30:39.046270] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
[2024-07-10 13:30:39.046429] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a
[2024-07-10 13:30:39.046554] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
[2024-07-10 13:30:39.046631] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
[2024-07-10 13:30:39.046737] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
[2024-07-10 13:30:39.046812] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
passed
00:06:59.909 Test: test_scsi2_reserve_release ...passed
00:06:59.909 Test: test_pr_with_scsi2_reserve_release ...[2024-07-10 13:30:39.047213] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
passed
00:06:59.909
00:06:59.909 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.909 suites 1 1 n/a 0 0
00:06:59.909 tests 7 7 7 0 0
00:06:59.909 asserts 257 257 257 0 n/a
00:06:59.909
00:06:59.909 Elapsed time = 0.003 seconds
00:06:59.909
00:06:59.909 real 0m0.239s
00:06:59.909 user 0m0.154s
00:06:59.909 sys 0m0.075s
00:06:59.909 13:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:59.910 ************************************
00:06:59.910 END TEST unittest_scsi
00:06:59.910 13:30:39 -- common/autotest_common.sh@10 -- # set +x
00:06:59.910 ************************************
00:06:59.910 13:30:39 -- unit/unittest.sh@276 -- # uname -s
00:06:59.910 13:30:39 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']'
00:06:59.910 13:30:39 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock
00:06:59.910 13:30:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:59.910 13:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:59.910 13:30:39 -- common/autotest_common.sh@10 -- # set +x
00:06:59.910 ************************************
00:06:59.910 START TEST unittest_sock
00:06:59.910 ************************************
00:06:59.910 13:30:39 -- common/autotest_common.sh@1104 -- # unittest_sock
00:06:59.910 13:30:39 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:06:59.910
00:06:59.910
00:06:59.910 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.910 http://cunit.sourceforge.net/
00:06:59.910
00:06:59.910
00:06:59.910 Suite: sock
00:06:59.910 Test: posix_sock ...passed
00:06:59.910 Test: ut_sock ...passed
00:06:59.910 Test: posix_sock_group ...passed
00:06:59.910 Test: ut_sock_group ...passed
00:06:59.910 Test: posix_sock_group_fairness ...passed
00:06:59.910 Test: _posix_sock_close ...passed
00:06:59.910 Test: sock_get_default_opts ...passed
00:06:59.910 Test: ut_sock_impl_get_set_opts ...passed
00:06:59.910 Test: posix_sock_impl_get_set_opts ...passed
00:06:59.910 Test: ut_sock_map ...passed
00:06:59.910 Test: override_impl_opts ...passed
00:06:59.910 Test: ut_sock_group_get_ctx ...passed
00:06:59.910
00:06:59.910 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.910 suites 1 1 n/a 0 0
00:06:59.910 tests 12 12 12 0 0
00:06:59.910 asserts 349 349 349 0 n/a
00:06:59.910
00:06:59.910 Elapsed time = 0.006 seconds
00:06:59.910 13:30:39 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:06:59.910
00:06:59.910
00:06:59.910 CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.910 http://cunit.sourceforge.net/
00:06:59.910
00:06:59.910
00:06:59.910 Suite: posix
00:06:59.910 Test: flush ...passed
00:06:59.910
00:06:59.910 Run Summary: Type Total Ran Passed Failed Inactive
00:06:59.910 suites 1 1 n/a 0 0
00:06:59.910 tests 1 1 1 0 0
00:06:59.910 asserts 28 28 28 0 n/a
00:06:59.910
00:06:59.910 Elapsed time = 0.000 seconds
00:06:59.910 13:30:39 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
************************************
00:06:59.910 END TEST unittest_sock
00:06:59.910 ************************************
00:06:59.910
00:06:59.910 real 0m0.099s
00:06:59.910 user 0m0.046s
00:06:59.910 sys 0m0.029s
00:06:59.910 13:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:59.910 13:30:39 -- common/autotest_common.sh@10 -- # set +x
00:07:00.169 13:30:39 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:00.169 13:30:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:00.169 13:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:00.169 13:30:39 -- common/autotest_common.sh@10 -- # set +x
00:07:00.169 ************************************
00:07:00.169 START TEST unittest_thread
00:07:00.169 ************************************
00:07:00.169 13:30:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:00.169
00:07:00.169
00:07:00.169 CUnit - A unit testing framework for C - Version 2.1-3
00:07:00.169 http://cunit.sourceforge.net/
00:07:00.169
00:07:00.169
00:07:00.169 Suite: io_channel
00:07:00.169 Test: thread_alloc ...passed
00:07:00.169 Test: thread_send_msg ...passed
00:07:00.169 Test: thread_poller ...passed
00:07:00.169 Test: poller_pause ...passed
00:07:00.169 Test: thread_for_each ...passed
00:07:00.169 Test: for_each_channel_remove ...passed
00:07:00.169 Test: for_each_channel_unreg ...[2024-07-10 13:30:39.325759] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffd1ababb80 already registered (old:0x613000000200 new:0x6130000003c0)
passed
00:07:00.169 Test: thread_name ...passed
00:07:00.169 Test: channel ...[2024-07-10 13:30:39.328700] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x560144daa0e0 00:07:00.169 passed 00:07:00.169 Test: channel_destroy_races ...passed 00:07:00.169 Test: thread_exit_test ...[2024-07-10 13:30:39.332264] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:00.169 passed 00:07:00.169 Test: thread_update_stats_test ...passed 00:07:00.169 Test: nested_channel ...passed 00:07:00.169 Test: device_unregister_and_thread_exit_race ...passed 00:07:00.169 Test: cache_closest_timed_poller ...passed 00:07:00.169 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:00.169 Test: io_device_lookup ...passed 00:07:00.169 Test: spdk_spin ...[2024-07-10 13:30:39.339041] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:00.169 [2024-07-10 13:30:39.339087] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd1ababb70 00:07:00.169 [2024-07-10 13:30:39.339157] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:00.169 [2024-07-10 13:30:39.340222] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:00.169 [2024-07-10 13:30:39.340287] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd1ababb70 00:07:00.169 [2024-07-10 13:30:39.340321] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:00.169 [2024-07-10 13:30:39.340360] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd1ababb70 00:07:00.170 [2024-07-10 13:30:39.340395] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:00.170 [2024-07-10 13:30:39.340437] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd1ababb70 00:07:00.170 [2024-07-10 13:30:39.340468] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:00.170 [2024-07-10 13:30:39.340511] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd1ababb70 00:07:00.170 passed 00:07:00.170 Test: for_each_channel_and_thread_exit_race ...passed 00:07:00.170 Test: for_each_thread_and_thread_exit_race ...passed 00:07:00.170 00:07:00.170 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.170 suites 1 1 n/a 0 0 00:07:00.170 tests 20 20 20 0 0 00:07:00.170 asserts 409 409 409 0 n/a 00:07:00.170 00:07:00.170 Elapsed time = 0.039 seconds 00:07:00.170 ************************************ 00:07:00.170 END TEST unittest_thread 00:07:00.170 ************************************ 00:07:00.170 00:07:00.170 real 0m0.093s 00:07:00.170 user 0m0.064s 00:07:00.170 sys 0m0.028s 00:07:00.170 13:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.170 13:30:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.170 13:30:39 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:00.170 13:30:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.170 13:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.170 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.170 ************************************ 00:07:00.170 START TEST unittest_iobuf 00:07:00.170 ************************************ 00:07:00.170 13:30:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:00.170 00:07:00.170 00:07:00.170 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.170 http://cunit.sourceforge.net/ 00:07:00.170 00:07:00.170 00:07:00.170 Suite: io_channel 00:07:00.170 Test: iobuf ...passed 00:07:00.170 Test: iobuf_cache ...[2024-07-10 13:30:39.457099] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:00.170 [2024-07-10 13:30:39.457436] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:00.170 [2024-07-10 13:30:39.457604] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:00.170 [2024-07-10 13:30:39.457675] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:00.170 [2024-07-10 13:30:39.457784] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:00.170 [2024-07-10 13:30:39.457855] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
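The iobuf_cache errors just above are provoked on purpose: the test configures a per-channel buffer cache larger than the backing pool (small_pool_count and large_pool_count of 4), so channel initialization has to fail and print the sizing hint. A hedged model of that failure mode follows; the names are illustrative and this is not SPDK's actual iobuf implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative model: a shared pool of buffers, and per-channel caches
     * that pre-populate themselves from the pool at channel-init time. */
    struct buf_pool { size_t free_bufs; };

    static bool channel_cache_init(struct buf_pool *pool, size_t cache_size)
    {
        if (pool->free_bufs < cache_size) {
            /* The "Failed to populate iobuf ... cache" path in the log. */
            fprintf(stderr, "cannot populate cache: need %zu, pool has %zu; "
                    "increase the pool count\n", cache_size, pool->free_bufs);
            return false;
        }
        pool->free_bufs -= cache_size;
        return true;
    }

    int main(void)
    {
        struct buf_pool small = { .free_bufs = 4 };  /* like small_pool_count (4) */
        channel_cache_init(&small, 4);   /* first channel drains the pool */
        channel_cache_init(&small, 4);   /* a second channel fails, as logged */
        return 0;
    }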
00:07:00.170 passed 00:07:00.170 00:07:00.170 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.170 suites 1 1 n/a 0 0 00:07:00.170 tests 2 2 2 0 0 00:07:00.170 asserts 107 107 107 0 n/a 00:07:00.170 00:07:00.170 Elapsed time = 0.006 seconds 00:07:00.170 00:07:00.170 real 0m0.054s 00:07:00.170 user 0m0.016s 00:07:00.170 sys 0m0.038s 00:07:00.170 13:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.170 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.170 ************************************ 00:07:00.170 END TEST unittest_iobuf 00:07:00.170 ************************************ 00:07:00.429 13:30:39 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:00.429 13:30:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.429 13:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.429 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.429 ************************************ 00:07:00.429 START TEST unittest_util 00:07:00.429 ************************************ 00:07:00.429 13:30:39 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:00.429 13:30:39 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: base64 00:07:00.429 Test: test_base64_get_encoded_strlen ...passed 00:07:00.429 Test: test_base64_get_decoded_len ...passed 00:07:00.429 Test: test_base64_encode ...passed 00:07:00.429 Test: test_base64_decode ...passed 00:07:00.429 Test: test_base64_urlsafe_encode ...passed 00:07:00.429 Test: test_base64_urlsafe_decode ...passed 00:07:00.429 00:07:00.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.429 suites 1 1 n/a 0 0 00:07:00.429 tests 6 6 6 0 0 00:07:00.429 asserts 112 112 112 0 n/a 00:07:00.429 00:07:00.429 Elapsed time = 0.000 seconds 00:07:00.429 13:30:39 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: bit_array 00:07:00.429 Test: test_1bit ...passed 00:07:00.429 Test: test_64bit ...passed 00:07:00.429 Test: test_find ...passed 00:07:00.429 Test: test_resize ...passed 00:07:00.429 Test: test_errors ...passed 00:07:00.429 Test: test_count ...passed 00:07:00.429 Test: test_mask_store_load ...passed 00:07:00.429 Test: test_mask_clear ...passed 00:07:00.429 00:07:00.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.429 suites 1 1 n/a 0 0 00:07:00.429 tests 8 8 8 0 0 00:07:00.429 asserts 5075 5075 5075 0 n/a 00:07:00.429 00:07:00.429 Elapsed time = 0.002 seconds 00:07:00.429 13:30:39 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: cpuset 00:07:00.429 Test: test_cpuset ...passed 00:07:00.429 Test: test_cpuset_parse ...[2024-07-10 13:30:39.657318] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:00.429 [2024-07-10 13:30:39.657758] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:00.429 [2024-07-10 13:30:39.657911] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:00.429 [2024-07-10 13:30:39.658053] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:00.429 [2024-07-10 13:30:39.658130] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:00.429 [2024-07-10 13:30:39.658219] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:00.429 [2024-07-10 13:30:39.658291] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:00.429 [2024-07-10 13:30:39.658391] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:00.429 passed 00:07:00.429 Test: test_cpuset_fmt ...passed 00:07:00.429 00:07:00.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.429 suites 1 1 n/a 0 0 00:07:00.429 tests 3 3 3 0 0 00:07:00.429 asserts 65 65 65 0 n/a 00:07:00.429 00:07:00.429 Elapsed time = 0.003 seconds 00:07:00.429 13:30:39 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: crc16 00:07:00.429 Test: test_crc16_t10dif ...passed 00:07:00.429 Test: test_crc16_t10dif_seed ...passed 00:07:00.429 Test: test_crc16_t10dif_copy ...passed 00:07:00.429 00:07:00.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.429 suites 1 1 n/a 0 0 00:07:00.429 tests 3 3 3 0 0 00:07:00.429 asserts 5 5 5 0 n/a 00:07:00.429 00:07:00.429 Elapsed time = 0.000 seconds 00:07:00.429 13:30:39 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: crc32_ieee 00:07:00.429 Test: test_crc32_ieee ...passed 00:07:00.429 00:07:00.429 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.429 suites 1 1 n/a 0 0 00:07:00.429 tests 1 1 1 0 0 00:07:00.429 asserts 1 1 1 0 n/a 00:07:00.429 00:07:00.429 Elapsed time = 0.000 seconds 00:07:00.429 13:30:39 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:00.429 00:07:00.429 00:07:00.429 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.429 http://cunit.sourceforge.net/ 00:07:00.429 00:07:00.429 00:07:00.429 Suite: crc32c 00:07:00.430 Test: test_crc32c ...passed 00:07:00.430 Test: test_crc32c_nvme ...passed 00:07:00.430 00:07:00.430 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.430 suites 1 1 n/a 0 0 00:07:00.430 tests 2 2 2 0 0 00:07:00.430 asserts 16 16 16 0 n/a 00:07:00.430 00:07:00.430 Elapsed time = 0.001 seconds 00:07:00.691 13:30:39 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:00.691 00:07:00.691 00:07:00.691 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.691 http://cunit.sourceforge.net/ 00:07:00.691 00:07:00.691 00:07:00.691 Suite: crc64 00:07:00.691 Test: test_crc64_nvme 
...passed 00:07:00.691 00:07:00.691 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.691 suites 1 1 n/a 0 0 00:07:00.691 tests 1 1 1 0 0 00:07:00.691 asserts 4 4 4 0 n/a 00:07:00.691 00:07:00.691 Elapsed time = 0.001 seconds 00:07:00.691 13:30:39 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:00.691 00:07:00.691 00:07:00.691 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.691 http://cunit.sourceforge.net/ 00:07:00.691 00:07:00.691 00:07:00.691 Suite: string 00:07:00.691 Test: test_parse_ip_addr ...passed 00:07:00.691 Test: test_str_chomp ...passed 00:07:00.691 Test: test_parse_capacity ...passed 00:07:00.691 Test: test_sprintf_append_realloc ...passed 00:07:00.691 Test: test_strtol ...passed 00:07:00.691 Test: test_strtoll ...passed 00:07:00.691 Test: test_strarray ...passed 00:07:00.691 Test: test_strcpy_replace ...passed 00:07:00.691 00:07:00.691 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.691 suites 1 1 n/a 0 0 00:07:00.691 tests 8 8 8 0 0 00:07:00.691 asserts 161 161 161 0 n/a 00:07:00.691 00:07:00.691 Elapsed time = 0.001 seconds 00:07:00.691 13:30:39 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:00.691 00:07:00.691 00:07:00.691 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.691 http://cunit.sourceforge.net/ 00:07:00.691 00:07:00.691 00:07:00.691 Suite: dif 00:07:00.691 Test: dif_generate_and_verify_test ...[2024-07-10 13:30:39.900739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:00.691 [2024-07-10 13:30:39.901194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:00.691 [2024-07-10 13:30:39.901458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:00.691 [2024-07-10 13:30:39.901713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:00.691 [2024-07-10 13:30:39.901960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:00.691 [2024-07-10 13:30:39.902216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:00.691 passed 00:07:00.691 Test: dif_disable_check_test ...[2024-07-10 13:30:39.903147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:00.691 [2024-07-10 13:30:39.903462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:00.691 [2024-07-10 13:30:39.903719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:00.691 passed 00:07:00.691 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-10 13:30:39.904677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:00.691 [2024-07-10 13:30:39.904961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:00.691 [2024-07-10 
13:30:39.905245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:00.691 [2024-07-10 13:30:39.905563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:00.691 [2024-07-10 13:30:39.905853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:00.691 [2024-07-10 13:30:39.906130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:00.691 [2024-07-10 13:30:39.906406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:00.691 [2024-07-10 13:30:39.906675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:00.691 [2024-07-10 13:30:39.906959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:00.691 [2024-07-10 13:30:39.907252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:00.691 [2024-07-10 13:30:39.907540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:00.691 passed 00:07:00.691 Test: dif_apptag_mask_test ...[2024-07-10 13:30:39.907871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:00.691 [2024-07-10 13:30:39.908159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:00.691 passed 00:07:00.691 Test: dif_sec_512_md_0_error_test ...[2024-07-10 13:30:39.908404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:00.691 passed 00:07:00.691 Test: dif_sec_4096_md_0_error_test ...[2024-07-10 13:30:39.908506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:00.691 [2024-07-10 13:30:39.908557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
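Each "Failed to compare Guard/App Tag/Ref Tag: LBA=..., Expected=..., Actual=..." line in this dif suite is one verification step over a T10 Protection Information tuple: 8 bytes per block carrying a 16-bit CRC guard, a 16-bit application tag and a 32-bit reference tag. A simplified verify sketch, assuming the standard T10-DIF CRC polynomial 0x8BB7 and ignoring the many block-size and metadata layouts the real lib/util/dif.c handles:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct t10_dif {        /* 8-byte PI tuple appended to each data block */
        uint16_t guard;     /* CRC16 of the block's data */
        uint16_t app_tag;
        uint32_t ref_tag;   /* typically the low 32 bits of the LBA */
    };

    uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;   /* T10-DIF: poly 0x8BB7, init 0, not reflected */

        while (len--) {
            crc ^= (uint16_t)(*buf++) << 8;
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    int dif_verify(const uint8_t *block, size_t len, const struct t10_dif *dif,
                   uint16_t expected_app, uint32_t lba)
    {
        uint16_t guard = crc16_t10dif(block, len);

        if (dif->guard != guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%u, Expected=%x, Actual=%x\n",
                    (unsigned)lba, guard, dif->guard);
            return -1;
        }
        if (dif->app_tag != expected_app) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%u, Expected=%x, Actual=%x\n",
                    (unsigned)lba, expected_app, dif->app_tag);
            return -1;
        }
        if (dif->ref_tag != lba) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%u, Expected=%x, Actual=%x\n",
                    (unsigned)lba, (unsigned)lba, (unsigned)dif->ref_tag);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t block[512] = { 0 };
        struct t10_dif dif = { crc16_t10dif(block, sizeof(block)), 0x88, 23 };

        return dif_verify(block, sizeof(block), &dif, 0x88, 23) ? 1 : 0;
    }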
00:07:00.691 passed 00:07:00.691 Test: dif_sec_4100_md_128_error_test ...[2024-07-10 13:30:39.908671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:00.691 [2024-07-10 13:30:39.908724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:00.691 passed 00:07:00.691 Test: dif_guard_seed_test ...passed 00:07:00.691 Test: dif_guard_value_test ...passed 00:07:00.691 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:00.691 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:00.691 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:30:39.937834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd5c, Actual=fd4c 00:07:00.691 [2024-07-10 13:30:39.939491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe31, Actual=fe21 00:07:00.691 [2024-07-10 13:30:39.941110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.691 [2024-07-10 13:30:39.942740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.691 [2024-07-10 13:30:39.944384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=100061 00:07:00.691 [2024-07-10 13:30:39.945990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=100061 00:07:00.691 [2024-07-10 13:30:39.947607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=43c4 00:07:00.691 [2024-07-10 13:30:39.949141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=5c78 00:07:00.692 [2024-07-10 13:30:39.950688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=97, Expected=1aa753ed, Actual=1ab753ed 00:07:00.692 [2024-07-10 13:30:39.952315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38474660, Actual=38574660 00:07:00.692 [2024-07-10 13:30:39.953951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.955569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.957205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=100061 00:07:00.692 [2024-07-10 13:30:39.958822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=100061 00:07:00.692 [2024-07-10 13:30:39.960454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=1ff49485 00:07:00.692 [2024-07-10 13:30:39.961986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=65194d10 00:07:00.692 [2024-07-10 13:30:39.963531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.692 [2024-07-10 13:30:39.965153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.692 [2024-07-10 13:30:39.966758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.968382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.969993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:07:00.692 [2024-07-10 13:30:39.971620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:07:00.692 [2024-07-10 13:30:39.973263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.692 [2024-07-10 13:30:39.974862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.692 passed 00:07:00.692 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-10 13:30:39.975884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:00.692 [2024-07-10 13:30:39.976107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:00.692 [2024-07-10 13:30:39.976314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.976523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.976745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.976950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.977156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=43c4 00:07:00.692 [2024-07-10 13:30:39.977276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5c78 00:07:00.692 [2024-07-10 13:30:39.977401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:00.692 [2024-07-10 13:30:39.977603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:00.692 [2024-07-10 13:30:39.977819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.978028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.978236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.978438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.978646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1ff49485 00:07:00.692 [2024-07-10 13:30:39.978765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=65194d10 00:07:00.692 [2024-07-10 13:30:39.978904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.692 [2024-07-10 13:30:39.979112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.692 [2024-07-10 13:30:39.979320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.979522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.979729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.692 [2024-07-10 13:30:39.979930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.692 [2024-07-10 13:30:39.980152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.692 [2024-07-10 13:30:39.980282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.692 passed 00:07:00.692 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-10 13:30:39.980475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:00.692 [2024-07-10 13:30:39.980687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:00.692 [2024-07-10 13:30:39.980888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.981093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.981304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.981516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.981725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=43c4 00:07:00.692 [2024-07-10 13:30:39.981849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5c78 00:07:00.692 [2024-07-10 13:30:39.981971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:00.692 [2024-07-10 13:30:39.982177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:00.692 [2024-07-10 13:30:39.982378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.982589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.982796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.983006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.983211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1ff49485 00:07:00.692 [2024-07-10 13:30:39.983333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=65194d10 00:07:00.692 [2024-07-10 13:30:39.983468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.692 [2024-07-10 13:30:39.983670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.692 [2024-07-10 13:30:39.983877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 
13:30:39.984094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.984306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.692 [2024-07-10 13:30:39.984509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.692 [2024-07-10 13:30:39.984724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.692 [2024-07-10 13:30:39.984844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.692 passed 00:07:00.692 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-10 13:30:39.985036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:00.692 [2024-07-10 13:30:39.985251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:00.692 [2024-07-10 13:30:39.985458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.985660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.985885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.986088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.986301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=43c4 00:07:00.692 [2024-07-10 13:30:39.986422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5c78 00:07:00.692 [2024-07-10 13:30:39.986548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:00.692 [2024-07-10 13:30:39.986751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:00.692 [2024-07-10 13:30:39.986977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.987189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.692 [2024-07-10 13:30:39.987392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.987598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.692 [2024-07-10 13:30:39.987804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=1ff49485 00:07:00.692 [2024-07-10 13:30:39.987928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=65194d10 00:07:00.692 [2024-07-10 13:30:39.988055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.692 [2024-07-10 13:30:39.988271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.693 [2024-07-10 13:30:39.988475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.988681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.988888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.989098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.989315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.693 [2024-07-10 13:30:39.989441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.693 passed 00:07:00.693 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-10 13:30:39.989643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:00.693 [2024-07-10 13:30:39.989849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:00.693 [2024-07-10 13:30:39.990056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.990265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.990491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.990693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.990900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=43c4 00:07:00.693 [2024-07-10 13:30:39.991027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5c78 00:07:00.693 passed 00:07:00.693 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-10 13:30:39.991219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:00.693 [2024-07-10 13:30:39.991426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:00.693 [2024-07-10 13:30:39.991642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.991844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.992051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.992261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.992470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1ff49485 00:07:00.693 [2024-07-10 13:30:39.992590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=65194d10 00:07:00.693 [2024-07-10 13:30:39.992744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.693 [2024-07-10 13:30:39.992961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.693 [2024-07-10 13:30:39.993165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.993372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.993577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.993783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.993995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.693 [2024-07-10 13:30:39.994119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.693 passed 00:07:00.693 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-10 13:30:39.994304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:07:00.693 [2024-07-10 13:30:39.994516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:07:00.693 [2024-07-10 13:30:39.994717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.994922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.995148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, 
Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.995352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.995558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=43c4 00:07:00.693 [2024-07-10 13:30:39.995676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5c78 00:07:00.693 passed 00:07:00.693 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-10 13:30:39.995870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1aa753ed, Actual=1ab753ed 00:07:00.693 [2024-07-10 13:30:39.996074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38474660, Actual=38574660 00:07:00.693 [2024-07-10 13:30:39.996314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.996524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.996734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.996936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058 00:07:00.693 [2024-07-10 13:30:39.997143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1ff49485 00:07:00.693 [2024-07-10 13:30:39.997262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=65194d10 00:07:00.693 [2024-07-10 13:30:39.997414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728edc20d3, Actual=a576a7728ecc20d3 00:07:00.693 [2024-07-10 13:30:39.997621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4827a266, Actual=88010a2d4837a266 00:07:00.693 [2024-07-10 13:30:39.997828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.998030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:07:00.693 [2024-07-10 13:30:39.998237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.998439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:07:00.693 [2024-07-10 13:30:39.998653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=f70d9482b67db574 00:07:00.693 [2024-07-10 13:30:39.998781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f728c0f7e20878f2 00:07:00.693 passed 
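The inject_* cases above generate those Expected/Actual floods on purpose: each one corrupts a field (guard, app tag, ref tag, or the payload itself), runs verification, and asserts that it fails with exactly that diagnostic; the 1_2_4_8 in the names suggests the payload is also split across 1, 2, 4 and 8 iovecs. The dif_copy_* cases that follow apply, per their names, the same checks while copying between buffers. A sketch of the injection pattern, assuming the illustrative types and dif_verify() from the earlier sketch (link against that file, minus its main):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct t10_dif { uint16_t guard, app_tag; uint32_t ref_tag; };  /* as above */
    int dif_verify(const uint8_t *block, size_t len, const struct t10_dif *dif,
                   uint16_t expected_app, uint32_t lba);            /* as above */

    /* Negative-test pattern: corrupt one field, require that verify notices,
     * then restore and require a clean pass. */
    void inject_guard_error_and_verify(uint8_t *block, size_t len,
                                       struct t10_dif *dif, uint32_t lba)
    {
        dif->guard ^= 0xffff;                                  /* inject */
        assert(dif_verify(block, len, dif, 0x88, lba) != 0);   /* must be caught */
        dif->guard ^= 0xffff;                                  /* restore */
        assert(dif_verify(block, len, dif, 0x88, lba) == 0);   /* clean again */
    }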
00:07:00.693 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:00.693 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:00.693 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:00.693 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:00.693 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:30:40.026669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:00.693 [2024-07-10 13:30:40.027387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:00.693 [2024-07-10 13:30:40.028080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.028770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.029460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.693 [2024-07-10 13:30:40.030142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.693 [2024-07-10 13:30:40.030827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=fa38 00:07:00.693 [2024-07-10 13:30:40.031515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3640 00:07:00.693 [2024-07-10 13:30:40.032214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:00.693 [2024-07-10 13:30:40.032903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:00.693 [2024-07-10 13:30:40.033595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.034296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.034987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.693 [2024-07-10 13:30:40.035679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.693 [2024-07-10 13:30:40.036371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bc08038e 00:07:00.693 [2024-07-10 13:30:40.037064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, 
Expected=9bbd506d, Actual=8e214b8a 00:07:00.693 [2024-07-10 13:30:40.037749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e4c20d3, Actual=a576a7728ecc20d3 00:07:00.693 [2024-07-10 13:30:40.038460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7466cc0998d9f212, Actual=7466cc099859f212 00:07:00.693 [2024-07-10 13:30:40.039150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.039843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.693 [2024-07-10 13:30:40.040533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.694 [2024-07-10 13:30:40.041227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.694 [2024-07-10 13:30:40.041911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=73c4b6d3ee05ee3c 00:07:00.694 [2024-07-10 13:30:40.042614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f48c93b38746af3f 00:07:00.694 passed 00:07:00.694 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-10 13:30:40.042890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:00.694 [2024-07-10 13:30:40.043077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:00.694 [2024-07-10 13:30:40.043261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 13:30:40.043440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 13:30:40.043633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.694 [2024-07-10 13:30:40.043827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.694 [2024-07-10 13:30:40.044003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=fa38 00:07:00.694 [2024-07-10 13:30:40.044190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3640 00:07:00.694 [2024-07-10 13:30:40.044369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:00.694 [2024-07-10 13:30:40.044554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:00.694 [2024-07-10 13:30:40.044746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 
13:30:40.044933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 13:30:40.045112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.694 [2024-07-10 13:30:40.045299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.694 [2024-07-10 13:30:40.045475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bc08038e 00:07:00.694 [2024-07-10 13:30:40.045656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=8e214b8a 00:07:00.694 [2024-07-10 13:30:40.045847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e4c20d3, Actual=a576a7728ecc20d3 00:07:00.694 [2024-07-10 13:30:40.046028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7466cc0998d9f212, Actual=7466cc099859f212 00:07:00.694 [2024-07-10 13:30:40.046212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 13:30:40.046389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.694 [2024-07-10 13:30:40.046572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.694 [2024-07-10 13:30:40.046749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.694 [2024-07-10 13:30:40.046938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=73c4b6d3ee05ee3c 00:07:00.694 [2024-07-10 13:30:40.047130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f48c93b38746af3f 00:07:00.694 passed 00:07:00.694 Test: dix_sec_512_md_0_error ...[2024-07-10 13:30:40.047226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
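The dix_* cases here repeat the matrix for DIX, where the protection tuples live in a separate metadata buffer instead of trailing each data block as in DIF. The sec_512_md_8 and sec_4096_md_128 parts of the test names encode the sector size and per-sector metadata size, and the md_0 variants exist to hit the "Metadata size is smaller than DIF size." guard seen in both the dif and dix runs. A hedged sketch of split-metadata verification, again reusing the illustrative types and dif_verify() from the earlier sketch:

    #include <stddef.h>
    #include <stdint.h>

    struct t10_dif { uint16_t guard, app_tag; uint32_t ref_tag; };  /* as above */
    int dif_verify(const uint8_t *block, size_t len, const struct t10_dif *dif,
                   uint16_t expected_app, uint32_t lba);            /* as above */

    /* DIX: data blocks in one buffer, one md_size-byte metadata slot per
     * block in another; the PI tuple sits at the start of each slot. */
    int dix_verify(const uint8_t *data, size_t block_size,
                   const uint8_t *md, size_t md_size,
                   uint32_t nblocks, uint32_t start_lba)
    {
        if (md_size < sizeof(struct t10_dif))
            return -1;   /* "Metadata size is smaller than DIF size." */

        for (uint32_t i = 0; i < nblocks; i++) {
            const struct t10_dif *dif =
                (const struct t10_dif *)(const void *)(md + (size_t)i * md_size);

            if (dif_verify(data + (size_t)i * block_size, block_size,
                           dif, 0x88, start_lba + i) != 0)
                return -1;
        }
        return 0;
    }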
00:07:00.694 passed 00:07:00.694 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:07:00.694 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:00.694 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:00.954 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:00.954 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:00.954 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:00.954 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:00.954 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:00.954 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:00.954 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-10 13:30:40.075222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:00.954 [2024-07-10 13:30:40.075930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:00.954 [2024-07-10 13:30:40.076627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.077310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.078009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.078707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.079394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=fa38 00:07:00.954 [2024-07-10 13:30:40.080094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3640 00:07:00.954 [2024-07-10 13:30:40.080782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:00.954 [2024-07-10 13:30:40.081470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:00.954 [2024-07-10 13:30:40.082161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.082849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.083539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.084235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.084922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bc08038e 00:07:00.954 [2024-07-10 13:30:40.085609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, 
Actual=8e214b8a 00:07:00.954 [2024-07-10 13:30:40.086309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e4c20d3, Actual=a576a7728ecc20d3 00:07:00.954 [2024-07-10 13:30:40.086996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7466cc0998d9f212, Actual=7466cc099859f212 00:07:00.954 [2024-07-10 13:30:40.087685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.088376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.089069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.954 [2024-07-10 13:30:40.089760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.954 [2024-07-10 13:30:40.090458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=73c4b6d3ee05ee3c 00:07:00.954 [2024-07-10 13:30:40.091148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f48c93b38746af3f 00:07:00.954 passed 00:07:00.954 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-10 13:30:40.091441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fdcc, Actual=fd4c 00:07:00.954 [2024-07-10 13:30:40.091625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7a6f, Actual=7aef 00:07:00.954 [2024-07-10 13:30:40.091811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.091997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.092201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.092381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.092568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=fa38 00:07:00.954 [2024-07-10 13:30:40.092744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3640 00:07:00.954 [2024-07-10 13:30:40.092929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a3753ed, Actual=1ab753ed 00:07:00.954 [2024-07-10 13:30:40.093129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3269ab4d, Actual=32e9ab4d 00:07:00.954 [2024-07-10 13:30:40.093322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.093509] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.093687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.093876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=800059 00:07:00.954 [2024-07-10 13:30:40.094057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=bc08038e 00:07:00.954 [2024-07-10 13:30:40.094243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=8e214b8a 00:07:00.954 [2024-07-10 13:30:40.094430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e4c20d3, Actual=a576a7728ecc20d3 00:07:00.954 [2024-07-10 13:30:40.094614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7466cc0998d9f212, Actual=7466cc099859f212 00:07:00.954 [2024-07-10 13:30:40.094792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.094981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8 00:07:00.954 [2024-07-10 13:30:40.095161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.954 [2024-07-10 13:30:40.095348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000000059 00:07:00.954 [2024-07-10 13:30:40.095530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=73c4b6d3ee05ee3c 00:07:00.955 [2024-07-10 13:30:40.095713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f48c93b38746af3f 00:07:00.955 passed 00:07:00.955 Test: set_md_interleave_iovs_test ...passed 00:07:00.955 Test: set_md_interleave_iovs_split_test ...passed 00:07:00.955 Test: dif_generate_stream_pi_16_test ...passed 00:07:00.955 Test: dif_generate_stream_test ...passed 00:07:00.955 Test: set_md_interleave_iovs_alignment_test ...passed 00:07:00.955 Test: dif_generate_split_test ...[2024-07-10 13:30:40.100741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:07:00.955 passed 00:07:00.955 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:00.955 Test: dif_verify_split_test ...passed 00:07:00.955 Test: dif_verify_stream_multi_segments_test ...passed 00:07:00.955 Test: update_crc32c_pi_16_test ...passed 00:07:00.955 Test: update_crc32c_test ...passed 00:07:00.955 Test: dif_update_crc32c_split_test ...passed 00:07:00.955 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:00.955 Test: get_range_with_md_test ...passed 00:07:00.955 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:00.955 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:00.955 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:00.955 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:00.955 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:00.955 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:00.955 Test: dif_generate_and_verify_unmap_test ...passed 00:07:00.955 00:07:00.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.955 suites 1 1 n/a 0 0 00:07:00.955 tests 79 79 79 0 0 00:07:00.955 asserts 3584 3584 3584 0 n/a 00:07:00.955 00:07:00.955 Elapsed time = 0.222 seconds 00:07:00.955 13:30:40 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:00.955 00:07:00.955 00:07:00.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.955 http://cunit.sourceforge.net/ 00:07:00.955 00:07:00.955 00:07:00.955 Suite: iov 00:07:00.955 Test: test_single_iov ...passed 00:07:00.955 Test: test_simple_iov ...passed 00:07:00.955 Test: test_complex_iov ...passed 00:07:00.955 Test: test_iovs_to_buf ...passed 00:07:00.955 Test: test_buf_to_iovs ...passed 00:07:00.955 Test: test_memset ...passed 00:07:00.955 Test: test_iov_one ...passed 00:07:00.955 Test: test_iov_xfer ...passed 00:07:00.955 00:07:00.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.955 suites 1 1 n/a 0 0 00:07:00.955 tests 8 8 8 0 0 00:07:00.955 asserts 156 156 156 0 n/a 00:07:00.955 00:07:00.955 Elapsed time = 0.000 seconds 00:07:00.955 13:30:40 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:00.955 00:07:00.955 00:07:00.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.955 http://cunit.sourceforge.net/ 00:07:00.955 00:07:00.955 00:07:00.955 Suite: math 00:07:00.955 Test: test_serial_number_arithmetic ...passed 00:07:00.955 Suite: erase 00:07:00.955 Test: test_memset_s ...passed 00:07:00.955 00:07:00.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.955 suites 2 2 n/a 0 0 00:07:00.955 tests 2 2 2 0 0 00:07:00.955 asserts 18 18 18 0 n/a 00:07:00.955 00:07:00.955 Elapsed time = 0.000 seconds 00:07:00.955 13:30:40 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:00.955 00:07:00.955 00:07:00.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.955 http://cunit.sourceforge.net/ 00:07:00.955 00:07:00.955 00:07:00.955 Suite: pipe 00:07:00.955 Test: test_create_destroy ...passed 00:07:00.955 Test: test_write_get_buffer ...passed 00:07:00.955 Test: test_write_advance ...passed 00:07:00.955 Test: test_read_get_buffer ...passed 00:07:00.955 Test: test_read_advance ...passed 00:07:00.955 Test: test_data ...passed 00:07:00.955 00:07:00.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.955 suites 1 1 n/a 0 
0 00:07:00.955 tests 6 6 6 0 0 00:07:00.955 asserts 250 250 250 0 n/a 00:07:00.955 00:07:00.955 Elapsed time = 0.000 seconds 00:07:00.955 13:30:40 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:00.955 00:07:00.955 00:07:00.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.955 http://cunit.sourceforge.net/ 00:07:00.955 00:07:00.955 00:07:00.955 Suite: xor 00:07:00.955 Test: test_xor_gen ...passed 00:07:00.955 00:07:00.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.955 suites 1 1 n/a 0 0 00:07:00.955 tests 1 1 1 0 0 00:07:00.955 asserts 17 17 17 0 n/a 00:07:00.955 00:07:00.955 Elapsed time = 0.009 seconds 00:07:00.955 00:07:00.955 real 0m0.740s 00:07:00.955 user 0m0.485s 00:07:00.955 sys 0m0.245s 00:07:00.955 13:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.955 ************************************ 00:07:00.955 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.955 END TEST unittest_util 00:07:00.955 ************************************ 00:07:01.215 13:30:40 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:01.215 13:30:40 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:01.215 13:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.215 13:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.215 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.215 ************************************ 00:07:01.215 START TEST unittest_vhost 00:07:01.215 ************************************ 00:07:01.215 13:30:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:01.215 00:07:01.215 00:07:01.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.215 http://cunit.sourceforge.net/ 00:07:01.215 00:07:01.215 00:07:01.215 Suite: vhost_suite 00:07:01.215 Test: desc_to_iov_test ...[2024-07-10 13:30:40.370292] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:01.215 passed 00:07:01.215 Test: create_controller_test ...[2024-07-10 13:30:40.375705] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:01.215 [2024-07-10 13:30:40.375853] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:01.215 [2024-07-10 13:30:40.375978] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:01.215 [2024-07-10 13:30:40.376076] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:01.215 [2024-07-10 13:30:40.376167] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:01.215 [2024-07-10 13:30:40.376294] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-10 13:30:40.377569] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:01.215 passed 00:07:01.215 Test: session_find_by_vid_test ...passed 00:07:01.215 Test: remove_controller_test ...[2024-07-10 13:30:40.379866] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:01.215 passed 00:07:01.215 Test: vq_avail_ring_get_test ...passed 00:07:01.215 Test: vq_packed_ring_test ...passed 00:07:01.215 Test: vhost_blk_construct_test ...passed 00:07:01.215 00:07:01.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.215 suites 1 1 n/a 0 0 00:07:01.215 tests 7 7 7 0 0 00:07:01.215 asserts 145 145 145 0 n/a 00:07:01.215 00:07:01.215 Elapsed time = 0.013 seconds 00:07:01.215 ************************************ 00:07:01.215 END TEST unittest_vhost 00:07:01.215 ************************************ 00:07:01.215 00:07:01.215 real 0m0.064s 00:07:01.215 user 0m0.046s 00:07:01.215 sys 0m0.017s 00:07:01.215 13:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.215 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.215 13:30:40 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:01.215 13:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.216 13:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.216 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.216 ************************************ 00:07:01.216 START TEST unittest_dma 00:07:01.216 ************************************ 00:07:01.216 13:30:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:01.216 00:07:01.216 00:07:01.216 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.216 http://cunit.sourceforge.net/ 00:07:01.216 00:07:01.216 00:07:01.216 Suite: dma_suite 00:07:01.216 Test: test_dma ...[2024-07-10 13:30:40.480018] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:01.216 passed 00:07:01.216 00:07:01.216 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.216 suites 1 1 n/a 0 0 00:07:01.216 tests 1 1 1 0 0 00:07:01.216 asserts 50 50 50 0 n/a 00:07:01.216 00:07:01.216 Elapsed time = 0.001 seconds 00:07:01.216 ************************************ 00:07:01.216 END TEST unittest_dma 00:07:01.216 
************************************ 00:07:01.216 00:07:01.216 real 0m0.032s 00:07:01.216 user 0m0.023s 00:07:01.216 sys 0m0.008s 00:07:01.216 13:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.216 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.216 13:30:40 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:01.216 13:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.216 13:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.216 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.216 ************************************ 00:07:01.216 START TEST unittest_init 00:07:01.216 ************************************ 00:07:01.216 13:30:40 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:01.216 13:30:40 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:01.216 00:07:01.216 00:07:01.216 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.216 http://cunit.sourceforge.net/ 00:07:01.216 00:07:01.216 00:07:01.216 Suite: subsystem_suite 00:07:01.216 Test: subsystem_sort_test_depends_on_single ...passed 00:07:01.216 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:01.216 Test: subsystem_sort_test_missing_dependency ...[2024-07-10 13:30:40.575289] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:01.216 [2024-07-10 13:30:40.575756] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:01.216 passed 00:07:01.216 00:07:01.216 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.216 suites 1 1 n/a 0 0 00:07:01.216 tests 3 3 3 0 0 00:07:01.216 asserts 20 20 20 0 n/a 00:07:01.216 00:07:01.216 Elapsed time = 0.001 seconds 00:07:01.476 ************************************ 00:07:01.476 END TEST unittest_init 00:07:01.476 ************************************ 00:07:01.476 00:07:01.476 real 0m0.050s 00:07:01.476 user 0m0.029s 00:07:01.476 sys 0m0.020s 00:07:01.476 13:30:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.476 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.476 13:30:40 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:01.476 13:30:40 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:01.476 13:30:40 -- unit/unittest.sh@290 -- # hostname 00:07:01.476 13:30:40 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:01.476 geninfo: WARNING: invalid characters removed from testname! 
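[editor's note] The lcov invocations that follow merge the pre-test baseline with the capture above, strip out-of-scope paths, and render the HTML report seen later in the log. A condensed sketch of that flow, with the repeated --rc flags dropped and the long absolute output path abbreviated to $OUT (standing in for /home/vagrant/spdk_repo/spdk/../output/ut_coverage) for readability:

  # capture counters gathered while the unit tests ran
  lcov -q -c -d . -t "$(hostname)" -o $OUT/ut_cov_test.info
  # merge the pre-test baseline with the test capture into one tracefile
  lcov -q -a $OUT/ut_cov_base.info -a $OUT/ut_cov_test.info -o $OUT/ut_cov_total.info
  # prune paths that should not count toward coverage (app/, dpdk/, examples/, test/, ...)
  lcov -q -r $OUT/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o $OUT/ut_cov_unit.info
  # render the per-file HTML view ("Processing file ..." lines below)
  genhtml $OUT/ut_cov_unit.info --output-directory $OUT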
00:07:28.075 13:31:04 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:29.979 13:31:09 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:33.265 13:31:11 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:35.163 13:31:14 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:37.693 13:31:16 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:39.595 13:31:18 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:42.129 13:31:20 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:44.031 13:31:23 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:44.031 13:31:23 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:44.290 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:44.290 Found 309 entries. 
00:07:44.290 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:44.290 Writing .css and .png files. 00:07:44.290 Generating output. 00:07:44.549 Processing file include/linux/virtio_ring.h 00:07:44.809 Processing file include/spdk/histogram_data.h 00:07:44.809 Processing file include/spdk/endian.h 00:07:44.809 Processing file include/spdk/util.h 00:07:44.809 Processing file include/spdk/base64.h 00:07:44.809 Processing file include/spdk/mmio.h 00:07:44.809 Processing file include/spdk/nvme_spec.h 00:07:44.809 Processing file include/spdk/trace.h 00:07:44.810 Processing file include/spdk/bdev_module.h 00:07:44.810 Processing file include/spdk/thread.h 00:07:44.810 Processing file include/spdk/nvmf_transport.h 00:07:44.810 Processing file include/spdk/nvme.h 00:07:44.810 Processing file include/spdk_internal/nvme_tcp.h 00:07:44.810 Processing file include/spdk_internal/sock.h 00:07:44.810 Processing file include/spdk_internal/utf.h 00:07:44.810 Processing file include/spdk_internal/rdma.h 00:07:44.810 Processing file include/spdk_internal/sgl.h 00:07:44.810 Processing file include/spdk_internal/virtio.h 00:07:45.069 Processing file lib/accel/accel_sw.c 00:07:45.069 Processing file lib/accel/accel_rpc.c 00:07:45.069 Processing file lib/accel/accel.c 00:07:45.329 Processing file lib/bdev/bdev_rpc.c 00:07:45.329 Processing file lib/bdev/bdev_zone.c 00:07:45.329 Processing file lib/bdev/scsi_nvme.c 00:07:45.329 Processing file lib/bdev/part.c 00:07:45.329 Processing file lib/bdev/bdev.c 00:07:45.588 Processing file lib/blob/blobstore.h 00:07:45.588 Processing file lib/blob/blobstore.c 00:07:45.588 Processing file lib/blob/blob_bs_dev.c 00:07:45.588 Processing file lib/blob/zeroes.c 00:07:45.588 Processing file lib/blob/request.c 00:07:45.588 Processing file lib/blobfs/tree.c 00:07:45.588 Processing file lib/blobfs/blobfs.c 00:07:45.588 Processing file lib/conf/conf.c 00:07:45.847 Processing file lib/dma/dma.c 00:07:46.106 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:46.106 Processing file lib/env_dpdk/memory.c 00:07:46.106 Processing file lib/env_dpdk/threads.c 00:07:46.106 Processing file lib/env_dpdk/env.c 00:07:46.106 Processing file lib/env_dpdk/pci_dpdk.c 00:07:46.106 Processing file lib/env_dpdk/init.c 00:07:46.106 Processing file lib/env_dpdk/sigbus_handler.c 00:07:46.106 Processing file lib/env_dpdk/pci_idxd.c 00:07:46.106 Processing file lib/env_dpdk/pci_ioat.c 00:07:46.106 Processing file lib/env_dpdk/pci_vmd.c 00:07:46.106 Processing file lib/env_dpdk/pci_event.c 00:07:46.106 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:46.106 Processing file lib/env_dpdk/pci_virtio.c 00:07:46.106 Processing file lib/env_dpdk/pci.c 00:07:46.106 Processing file lib/event/app.c 00:07:46.106 Processing file lib/event/reactor.c 00:07:46.106 Processing file lib/event/scheduler_static.c 00:07:46.106 Processing file lib/event/log_rpc.c 00:07:46.106 Processing file lib/event/app_rpc.c 00:07:46.672 Processing file lib/ftl/ftl_writer.c 00:07:46.672 Processing file lib/ftl/ftl_l2p_flat.c 00:07:46.672 Processing file lib/ftl/ftl_band.c 00:07:46.672 Processing file lib/ftl/ftl_reloc.c 00:07:46.672 Processing file lib/ftl/ftl_writer.h 00:07:46.672 Processing file lib/ftl/ftl_nv_cache.h 00:07:46.672 Processing file lib/ftl/ftl_debug.h 00:07:46.672 Processing file lib/ftl/ftl_l2p.c 00:07:46.672 Processing file lib/ftl/ftl_core.h 00:07:46.672 Processing file lib/ftl/ftl_trace.c 00:07:46.672 Processing file lib/ftl/ftl_p2l.c 00:07:46.672 Processing file lib/ftl/ftl_band.h 00:07:46.672 
Processing file lib/ftl/ftl_io.c 00:07:46.672 Processing file lib/ftl/ftl_band_ops.c 00:07:46.672 Processing file lib/ftl/ftl_debug.c 00:07:46.672 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:46.672 Processing file lib/ftl/ftl_l2p_cache.c 00:07:46.672 Processing file lib/ftl/ftl_rq.c 00:07:46.672 Processing file lib/ftl/ftl_layout.c 00:07:46.672 Processing file lib/ftl/ftl_io.h 00:07:46.672 Processing file lib/ftl/ftl_nv_cache.c 00:07:46.672 Processing file lib/ftl/ftl_core.c 00:07:46.672 Processing file lib/ftl/ftl_init.c 00:07:46.672 Processing file lib/ftl/ftl_sb.c 00:07:46.672 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:46.672 Processing file lib/ftl/base/ftl_base_dev.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:46.930 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:46.931 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:46.931 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:46.931 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:46.931 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:46.931 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:46.931 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:46.931 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:46.931 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:46.931 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:46.931 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:46.931 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:47.217 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:47.217 Processing file lib/ftl/utils/ftl_md.c 00:07:47.217 Processing file lib/ftl/utils/ftl_property.h 00:07:47.217 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:47.217 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:47.217 Processing file lib/ftl/utils/ftl_mempool.c 00:07:47.217 Processing file lib/ftl/utils/ftl_conf.c 00:07:47.217 Processing file lib/ftl/utils/ftl_property.c 00:07:47.217 Processing file lib/ftl/utils/ftl_df.h 00:07:47.217 Processing file lib/idxd/idxd.c 00:07:47.217 Processing file lib/idxd/idxd_user.c 00:07:47.217 Processing file lib/idxd/idxd_internal.h 00:07:47.477 Processing file lib/init/rpc.c 00:07:47.477 Processing file lib/init/subsystem.c 00:07:47.477 Processing file lib/init/json_config.c 00:07:47.477 Processing file lib/init/subsystem_rpc.c 00:07:47.477 Processing file lib/ioat/ioat_internal.h 00:07:47.477 Processing file lib/ioat/ioat.c 00:07:47.748 Processing file lib/iscsi/iscsi.c 00:07:47.748 Processing file lib/iscsi/md5.c 00:07:47.748 Processing file lib/iscsi/task.h 00:07:47.748 Processing file lib/iscsi/iscsi.h 00:07:47.748 Processing file lib/iscsi/conn.c 00:07:47.748 Processing file lib/iscsi/param.c 00:07:47.748 Processing file lib/iscsi/init_grp.c 00:07:47.748 Processing file lib/iscsi/iscsi_subsystem.c 00:07:47.748 Processing file lib/iscsi/portal_grp.c 00:07:47.748 Processing file lib/iscsi/task.c 00:07:47.748 Processing file lib/iscsi/iscsi_rpc.c 00:07:47.748 Processing file lib/iscsi/tgt_node.c 00:07:48.006 Processing file lib/json/json_util.c 00:07:48.006 Processing file lib/json/json_parse.c 00:07:48.006 Processing file lib/json/json_write.c 00:07:48.006 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:07:48.006 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:48.006 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:48.006 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:48.006 Processing file lib/log/log_flags.c 00:07:48.006 Processing file lib/log/log.c 00:07:48.006 Processing file lib/log/log_deprecated.c 00:07:48.265 Processing file lib/lvol/lvol.c 00:07:48.265 Processing file lib/nbd/nbd_rpc.c 00:07:48.265 Processing file lib/nbd/nbd.c 00:07:48.265 Processing file lib/notify/notify_rpc.c 00:07:48.265 Processing file lib/notify/notify.c 00:07:48.865 Processing file lib/nvme/nvme_opal.c 00:07:48.865 Processing file lib/nvme/nvme_internal.h 00:07:48.865 Processing file lib/nvme/nvme_pcie.c 00:07:48.865 Processing file lib/nvme/nvme_pcie_common.c 00:07:48.865 Processing file lib/nvme/nvme_zns.c 00:07:48.865 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:48.865 Processing file lib/nvme/nvme_cuse.c 00:07:48.865 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:48.865 Processing file lib/nvme/nvme_quirks.c 00:07:48.865 Processing file lib/nvme/nvme_rdma.c 00:07:48.865 Processing file lib/nvme/nvme_ns.c 00:07:48.865 Processing file lib/nvme/nvme_poll_group.c 00:07:48.865 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:48.865 Processing file lib/nvme/nvme_ns_cmd.c 00:07:48.865 Processing file lib/nvme/nvme_tcp.c 00:07:48.865 Processing file lib/nvme/nvme_qpair.c 00:07:48.865 Processing file lib/nvme/nvme_pcie_internal.h 00:07:48.865 Processing file lib/nvme/nvme_discovery.c 00:07:48.865 Processing file lib/nvme/nvme.c 00:07:48.865 Processing file lib/nvme/nvme_fabric.c 00:07:48.865 Processing file lib/nvme/nvme_ctrlr.c 00:07:48.865 Processing file lib/nvme/nvme_transport.c 00:07:48.865 Processing file lib/nvme/nvme_io_msg.c 00:07:48.865 Processing file lib/nvme/nvme_vfio_user.c 00:07:49.430 Processing file lib/nvmf/transport.c 00:07:49.430 Processing file lib/nvmf/subsystem.c 00:07:49.430 Processing file lib/nvmf/nvmf.c 00:07:49.430 Processing file lib/nvmf/nvmf_rpc.c 00:07:49.430 Processing file lib/nvmf/rdma.c 00:07:49.430 Processing file lib/nvmf/ctrlr_bdev.c 00:07:49.430 Processing file lib/nvmf/ctrlr.c 00:07:49.430 Processing file lib/nvmf/tcp.c 00:07:49.430 Processing file lib/nvmf/ctrlr_discovery.c 00:07:49.430 Processing file lib/nvmf/nvmf_internal.h 00:07:49.430 Processing file lib/rdma/common.c 00:07:49.430 Processing file lib/rdma/rdma_verbs.c 00:07:49.430 Processing file lib/rpc/rpc.c 00:07:49.686 Processing file lib/scsi/scsi_pr.c 00:07:49.686 Processing file lib/scsi/scsi.c 00:07:49.686 Processing file lib/scsi/scsi_bdev.c 00:07:49.686 Processing file lib/scsi/port.c 00:07:49.686 Processing file lib/scsi/lun.c 00:07:49.686 Processing file lib/scsi/dev.c 00:07:49.686 Processing file lib/scsi/task.c 00:07:49.686 Processing file lib/scsi/scsi_rpc.c 00:07:49.686 Processing file lib/sock/sock_rpc.c 00:07:49.686 Processing file lib/sock/sock.c 00:07:49.944 Processing file lib/thread/iobuf.c 00:07:49.944 Processing file lib/thread/thread.c 00:07:49.944 Processing file lib/trace/trace_rpc.c 00:07:49.944 Processing file lib/trace/trace.c 00:07:49.945 Processing file lib/trace/trace_flags.c 00:07:49.945 Processing file lib/trace_parser/trace.cpp 00:07:49.945 Processing file lib/ut/ut.c 00:07:50.203 Processing file lib/ut_mock/mock.c 00:07:50.462 Processing file lib/util/uuid.c 00:07:50.462 Processing file lib/util/crc32.c 00:07:50.462 Processing file lib/util/cpuset.c 00:07:50.462 Processing file lib/util/pipe.c 00:07:50.462 Processing 
file lib/util/hexlify.c 00:07:50.462 Processing file lib/util/strerror_tls.c 00:07:50.462 Processing file lib/util/crc32c.c 00:07:50.462 Processing file lib/util/crc16.c 00:07:50.462 Processing file lib/util/crc32_ieee.c 00:07:50.462 Processing file lib/util/iov.c 00:07:50.462 Processing file lib/util/dif.c 00:07:50.462 Processing file lib/util/base64.c 00:07:50.462 Processing file lib/util/file.c 00:07:50.462 Processing file lib/util/xor.c 00:07:50.462 Processing file lib/util/math.c 00:07:50.462 Processing file lib/util/bit_array.c 00:07:50.462 Processing file lib/util/crc64.c 00:07:50.462 Processing file lib/util/fd_group.c 00:07:50.462 Processing file lib/util/string.c 00:07:50.462 Processing file lib/util/fd.c 00:07:50.462 Processing file lib/util/zipf.c 00:07:50.462 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:50.462 Processing file lib/vfio_user/host/vfio_user.c 00:07:50.720 Processing file lib/vhost/vhost_blk.c 00:07:50.720 Processing file lib/vhost/vhost_rpc.c 00:07:50.720 Processing file lib/vhost/rte_vhost_user.c 00:07:50.720 Processing file lib/vhost/vhost_scsi.c 00:07:50.720 Processing file lib/vhost/vhost.c 00:07:50.720 Processing file lib/vhost/vhost_internal.h 00:07:50.720 Processing file lib/virtio/virtio_vfio_user.c 00:07:50.720 Processing file lib/virtio/virtio.c 00:07:50.720 Processing file lib/virtio/virtio_vhost_user.c 00:07:50.721 Processing file lib/virtio/virtio_pci.c 00:07:50.979 Processing file lib/vmd/vmd.c 00:07:50.979 Processing file lib/vmd/led.c 00:07:50.979 Processing file module/accel/dsa/accel_dsa.c 00:07:50.979 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:50.979 Processing file module/accel/error/accel_error.c 00:07:50.979 Processing file module/accel/error/accel_error_rpc.c 00:07:50.979 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:50.979 Processing file module/accel/iaa/accel_iaa.c 00:07:51.238 Processing file module/accel/ioat/accel_ioat.c 00:07:51.238 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:51.238 Processing file module/bdev/aio/bdev_aio.c 00:07:51.238 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:51.238 Processing file module/bdev/delay/vbdev_delay.c 00:07:51.238 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:51.498 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:51.498 Processing file module/bdev/error/vbdev_error.c 00:07:51.498 Processing file module/bdev/ftl/bdev_ftl.c 00:07:51.498 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:51.498 Processing file module/bdev/gpt/gpt.h 00:07:51.498 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:51.498 Processing file module/bdev/gpt/gpt.c 00:07:51.498 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:51.498 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:51.758 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:51.758 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:51.758 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:51.758 Processing file module/bdev/malloc/bdev_malloc.c 00:07:51.758 Processing file module/bdev/null/bdev_null.c 00:07:51.758 Processing file module/bdev/null/bdev_null_rpc.c 00:07:52.018 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:52.018 Processing file module/bdev/nvme/bdev_nvme.c 00:07:52.018 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:52.018 Processing file module/bdev/nvme/vbdev_opal.c 00:07:52.018 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:52.018 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:52.018 Processing file 
module/bdev/nvme/nvme_rpc.c 00:07:52.018 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:52.018 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:52.278 Processing file module/bdev/raid/concat.c 00:07:52.278 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:52.278 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:52.278 Processing file module/bdev/raid/raid5f.c 00:07:52.278 Processing file module/bdev/raid/bdev_raid.h 00:07:52.278 Processing file module/bdev/raid/raid1.c 00:07:52.278 Processing file module/bdev/raid/raid0.c 00:07:52.278 Processing file module/bdev/raid/bdev_raid.c 00:07:52.278 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:52.278 Processing file module/bdev/split/vbdev_split.c 00:07:52.278 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:52.278 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:52.278 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:52.537 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:52.537 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:52.537 Processing file module/blob/bdev/blob_bdev.c 00:07:52.537 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:52.537 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:52.537 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:52.537 Processing file module/event/subsystems/accel/accel.c 00:07:52.537 Processing file module/event/subsystems/bdev/bdev.c 00:07:52.537 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:52.537 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:52.797 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:52.797 Processing file module/event/subsystems/nbd/nbd.c 00:07:52.797 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:52.797 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:52.797 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:52.797 Processing file module/event/subsystems/scsi/scsi.c 00:07:53.056 Processing file module/event/subsystems/sock/sock.c 00:07:53.056 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:53.056 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:53.056 Processing file module/event/subsystems/vmd/vmd.c 00:07:53.056 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:53.056 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:53.056 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:53.315 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:53.315 Processing file module/sock/sock_kernel.h 00:07:53.315 Processing file module/sock/posix/posix.c 00:07:53.315 Writing directory view page. 
00:07:53.315 Overall coverage rate: 00:07:53.315 lines......: 39.1% (39263 of 100392 lines) 00:07:53.315 functions..: 42.8% (3587 of 8384 functions) 00:07:53.315 00:07:53.315 00:07:53.315 ===================== 00:07:53.315 All unit tests passed 00:07:53.315 ===================== 00:07:53.315 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:53.315 13:31:32 -- unit/unittest.sh@302 -- # set +x 00:07:53.315 00:07:53.315 00:07:53.315 00:07:53.315 real 2m51.374s 00:07:53.315 user 2m28.560s 00:07:53.315 sys 0m14.889s 00:07:53.315 13:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.315 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.315 ************************************ 00:07:53.315 END TEST unittest 00:07:53.315 ************************************ 00:07:53.315 13:31:32 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:53.315 13:31:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:53.315 13:31:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:53.315 13:31:32 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:53.315 13:31:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.315 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 13:31:32 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:53.316 13:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.316 13:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.316 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.316 ************************************ 00:07:53.316 START TEST env 00:07:53.316 ************************************ 00:07:53.316 13:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:53.576 * Looking for test storage... 
00:07:53.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:53.576 13:31:32 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:53.576 13:31:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.576 13:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.576 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.576 ************************************ 00:07:53.576 START TEST env_memory 00:07:53.576 ************************************ 00:07:53.576 13:31:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:53.576 00:07:53.576 00:07:53.576 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.576 http://cunit.sourceforge.net/ 00:07:53.576 00:07:53.576 00:07:53.576 Suite: memory 00:07:53.576 Test: alloc and free memory map ...[2024-07-10 13:31:32.799461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:53.576 passed 00:07:53.576 Test: mem map translation ...[2024-07-10 13:31:32.848874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:53.576 [2024-07-10 13:31:32.849304] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:53.576 [2024-07-10 13:31:32.849689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:53.576 [2024-07-10 13:31:32.849937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:53.576 passed 00:07:53.576 Test: mem map registration ...[2024-07-10 13:31:32.918388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:53.576 [2024-07-10 13:31:32.918480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:53.576 passed 00:07:53.835 Test: mem map adjacent registrations ...passed 00:07:53.835 00:07:53.835 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.835 suites 1 1 n/a 0 0 00:07:53.835 tests 4 4 4 0 0 00:07:53.835 asserts 152 152 152 0 n/a 00:07:53.835 00:07:53.835 Elapsed time = 0.220 seconds 00:07:53.835 00:07:53.835 real 0m0.255s 00:07:53.835 user 0m0.231s 00:07:53.835 sys 0m0.024s 00:07:53.835 13:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.835 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.835 ************************************ 00:07:53.835 END TEST env_memory 00:07:53.835 ************************************ 00:07:53.835 13:31:33 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:53.835 13:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.835 13:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.835 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.835 ************************************ 00:07:53.835 START TEST env_vtophys 00:07:53.835 ************************************ 00:07:53.835 13:31:33 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:53.835 EAL: lib.eal log level changed from notice to debug 00:07:53.835 EAL: Detected lcore 0 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 1 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 2 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 3 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 4 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 5 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 6 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 7 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 8 as core 0 on socket 0 00:07:53.835 EAL: Detected lcore 9 as core 0 on socket 0 00:07:53.835 EAL: Maximum logical cores by configuration: 128 00:07:53.835 EAL: Detected CPU lcores: 10 00:07:53.835 EAL: Detected NUMA nodes: 1 00:07:53.835 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:53.835 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:53.835 EAL: Checking presence of .so 'librte_eal.so' 00:07:53.835 EAL: Detected static linkage of DPDK 00:07:53.835 EAL: No shared files mode enabled, IPC will be disabled 00:07:53.835 EAL: Selected IOVA mode 'PA' 00:07:53.835 EAL: Probing VFIO support... 00:07:53.835 EAL: IOMMU type 1 (Type 1) is supported 00:07:53.835 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:53.835 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:53.835 EAL: VFIO support initialized 00:07:53.835 EAL: Ask a virtual area of 0x2e000 bytes 00:07:53.835 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:53.835 EAL: Setting up physically contiguous memory... 00:07:53.835 EAL: Setting maximum number of open files to 1048576 00:07:53.835 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:53.835 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:53.835 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.835 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:53.835 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.835 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.835 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:53.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:53.836 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:53.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.836 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:53.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:53.836 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:53.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.836 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:53.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:53.836 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:53.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.836 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:53.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:53.836 EAL: Hugepages will be freed exactly as allocated. 
00:07:53.836 EAL: No shared files mode enabled, IPC is disabled 00:07:53.836 EAL: No shared files mode enabled, IPC is disabled 00:07:54.094 EAL: TSC frequency is ~2290000 KHz 00:07:54.094 EAL: Main lcore 0 is ready (tid=7f8ab59ada40;cpuset=[0]) 00:07:54.094 EAL: Trying to obtain current memory policy. 00:07:54.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.094 EAL: Restoring previous memory policy: 0 00:07:54.094 EAL: request: mp_malloc_sync 00:07:54.094 EAL: No shared files mode enabled, IPC is disabled 00:07:54.094 EAL: Heap on socket 0 was expanded by 2MB 00:07:54.094 EAL: No shared files mode enabled, IPC is disabled 00:07:54.094 EAL: Mem event callback 'spdk:(nil)' registered 00:07:54.094 00:07:54.094 00:07:54.094 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.094 http://cunit.sourceforge.net/ 00:07:54.094 00:07:54.094 00:07:54.094 Suite: components_suite 00:07:54.358 Test: vtophys_malloc_test ...passed 00:07:54.358 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:54.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.358 EAL: Restoring previous memory policy: 0 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was expanded by 4MB 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was shrunk by 4MB 00:07:54.358 EAL: Trying to obtain current memory policy. 00:07:54.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.358 EAL: Restoring previous memory policy: 0 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was expanded by 6MB 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was shrunk by 6MB 00:07:54.358 EAL: Trying to obtain current memory policy. 00:07:54.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.358 EAL: Restoring previous memory policy: 0 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was expanded by 10MB 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was shrunk by 10MB 00:07:54.358 EAL: Trying to obtain current memory policy. 00:07:54.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.358 EAL: Restoring previous memory policy: 0 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was expanded by 18MB 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was shrunk by 18MB 00:07:54.358 EAL: Trying to obtain current memory policy. 
00:07:54.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.358 EAL: Restoring previous memory policy: 0 00:07:54.358 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.358 EAL: request: mp_malloc_sync 00:07:54.358 EAL: No shared files mode enabled, IPC is disabled 00:07:54.358 EAL: Heap on socket 0 was expanded by 34MB 00:07:54.619 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.619 EAL: request: mp_malloc_sync 00:07:54.619 EAL: No shared files mode enabled, IPC is disabled 00:07:54.619 EAL: Heap on socket 0 was shrunk by 34MB 00:07:54.619 EAL: Trying to obtain current memory policy. 00:07:54.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.619 EAL: Restoring previous memory policy: 0 00:07:54.619 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.619 EAL: request: mp_malloc_sync 00:07:54.619 EAL: No shared files mode enabled, IPC is disabled 00:07:54.619 EAL: Heap on socket 0 was expanded by 66MB 00:07:54.619 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.619 EAL: request: mp_malloc_sync 00:07:54.619 EAL: No shared files mode enabled, IPC is disabled 00:07:54.619 EAL: Heap on socket 0 was shrunk by 66MB 00:07:54.879 EAL: Trying to obtain current memory policy. 00:07:54.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.879 EAL: Restoring previous memory policy: 0 00:07:54.879 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.879 EAL: request: mp_malloc_sync 00:07:54.879 EAL: No shared files mode enabled, IPC is disabled 00:07:54.879 EAL: Heap on socket 0 was expanded by 130MB 00:07:55.138 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.138 EAL: request: mp_malloc_sync 00:07:55.138 EAL: No shared files mode enabled, IPC is disabled 00:07:55.138 EAL: Heap on socket 0 was shrunk by 130MB 00:07:55.138 EAL: Trying to obtain current memory policy. 00:07:55.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.397 EAL: Restoring previous memory policy: 0 00:07:55.397 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.397 EAL: request: mp_malloc_sync 00:07:55.397 EAL: No shared files mode enabled, IPC is disabled 00:07:55.397 EAL: Heap on socket 0 was expanded by 258MB 00:07:55.655 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.655 EAL: request: mp_malloc_sync 00:07:55.655 EAL: No shared files mode enabled, IPC is disabled 00:07:55.655 EAL: Heap on socket 0 was shrunk by 258MB 00:07:56.223 EAL: Trying to obtain current memory policy. 00:07:56.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:56.223 EAL: Restoring previous memory policy: 0 00:07:56.223 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.223 EAL: request: mp_malloc_sync 00:07:56.223 EAL: No shared files mode enabled, IPC is disabled 00:07:56.223 EAL: Heap on socket 0 was expanded by 514MB 00:07:57.159 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.159 EAL: request: mp_malloc_sync 00:07:57.159 EAL: No shared files mode enabled, IPC is disabled 00:07:57.159 EAL: Heap on socket 0 was shrunk by 514MB 00:07:58.094 EAL: Trying to obtain current memory policy. 
00:07:58.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:58.094 EAL: Restoring previous memory policy: 0 00:07:58.094 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.094 EAL: request: mp_malloc_sync 00:07:58.094 EAL: No shared files mode enabled, IPC is disabled 00:07:58.094 EAL: Heap on socket 0 was expanded by 1026MB 00:08:00.025 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.284 EAL: request: mp_malloc_sync 00:08:00.284 EAL: No shared files mode enabled, IPC is disabled 00:08:00.284 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:01.664 passed 00:08:01.664 00:08:01.664 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.664 suites 1 1 n/a 0 0 00:08:01.664 tests 2 2 2 0 0 00:08:01.664 asserts 6545 6545 6545 0 n/a 00:08:01.664 00:08:01.664 Elapsed time = 7.657 seconds 00:08:01.664 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.664 EAL: request: mp_malloc_sync 00:08:01.664 EAL: No shared files mode enabled, IPC is disabled 00:08:01.664 EAL: Heap on socket 0 was shrunk by 2MB 00:08:01.664 EAL: No shared files mode enabled, IPC is disabled 00:08:01.664 EAL: No shared files mode enabled, IPC is disabled 00:08:01.664 EAL: No shared files mode enabled, IPC is disabled 00:08:01.925 ************************************ 00:08:01.925 END TEST env_vtophys 00:08:01.925 ************************************ 00:08:01.925 00:08:01.925 real 0m7.974s 00:08:01.925 user 0m7.130s 00:08:01.925 sys 0m0.693s 00:08:01.925 13:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.925 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:01.925 13:31:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:01.925 13:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:01.925 13:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.925 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:01.925 ************************************ 00:08:01.925 START TEST env_pci 00:08:01.925 ************************************ 00:08:01.925 13:31:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:01.925 00:08:01.925 00:08:01.925 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.925 http://cunit.sourceforge.net/ 00:08:01.925 00:08:01.925 00:08:01.925 Suite: pci 00:08:01.925 Test: pci_hook ...[2024-07-10 13:31:41.146713] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 104849 has claimed it 00:08:01.925 EAL: Cannot find device (10000:00:01.0) 00:08:01.925 EAL: Failed to attach device on primary process 00:08:01.925 passed 00:08:01.925 00:08:01.925 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.925 suites 1 1 n/a 0 0 00:08:01.925 tests 1 1 1 0 0 00:08:01.925 asserts 25 25 25 0 n/a 00:08:01.925 00:08:01.925 Elapsed time = 0.006 seconds 00:08:01.925 00:08:01.925 real 0m0.122s 00:08:01.925 user 0m0.077s 00:08:01.925 sys 0m0.045s 00:08:01.925 13:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.925 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:01.925 ************************************ 00:08:01.925 END TEST env_pci 00:08:01.925 ************************************ 00:08:01.925 13:31:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:01.925 13:31:41 -- env/env.sh@15 -- # uname 00:08:01.925 13:31:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:01.925 13:31:41 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:01.925 13:31:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:01.925 13:31:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:01.925 13:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.925 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.185 ************************************ 00:08:02.185 START TEST env_dpdk_post_init 00:08:02.185 ************************************ 00:08:02.185 13:31:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:02.185 EAL: Detected CPU lcores: 10 00:08:02.185 EAL: Detected NUMA nodes: 1 00:08:02.185 EAL: Detected static linkage of DPDK 00:08:02.185 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:02.185 EAL: Selected IOVA mode 'PA' 00:08:02.185 EAL: VFIO support initialized 00:08:02.185 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:02.185 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:02.444 Starting DPDK initialization... 00:08:02.444 Starting SPDK post initialization... 00:08:02.444 SPDK NVMe probe 00:08:02.444 Attaching to 0000:00:06.0 00:08:02.444 Attached to 0000:00:06.0 00:08:02.444 Cleaning up... 00:08:02.444 ************************************ 00:08:02.444 END TEST env_dpdk_post_init 00:08:02.444 ************************************ 00:08:02.444 00:08:02.444 real 0m0.272s 00:08:02.444 user 0m0.078s 00:08:02.444 sys 0m0.095s 00:08:02.444 13:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.444 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.444 13:31:41 -- env/env.sh@26 -- # uname 00:08:02.444 13:31:41 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:02.444 13:31:41 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:02.444 13:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.444 13:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.444 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.444 ************************************ 00:08:02.444 START TEST env_mem_callbacks 00:08:02.444 ************************************ 00:08:02.444 13:31:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:02.444 EAL: Detected CPU lcores: 10 00:08:02.444 EAL: Detected NUMA nodes: 1 00:08:02.444 EAL: Detected static linkage of DPDK 00:08:02.444 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:02.444 EAL: Selected IOVA mode 'PA' 00:08:02.444 EAL: VFIO support initialized 00:08:02.704 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:02.704 00:08:02.704 00:08:02.704 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.704 http://cunit.sourceforge.net/ 00:08:02.704 00:08:02.704 00:08:02.704 Suite: memory 00:08:02.704 Test: test ... 
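The raw trace that follows is the mem_callbacks bookkeeping: each large enough malloc triggers a register of the backing region, each free an unregister, and the registered region can exceed the request (the 4194304-byte malloc below registers 6291456 bytes). A hedged sanity check for such a trace, assuming it has been captured to a file (trace.log is hypothetical; $1 is the timestamp column in this log):

  # Every registered address must be unregistered again; leftover counts are leaks.
  awk '$2 == "register"   { n[$3]++ }
       $2 == "unregister" { n[$3]-- }
       END { for (a in n) if (n[a] != 0) { print "unbalanced:", a; bad = 1 }
             exit bad }' trace.log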
00:08:02.704 register 0x200000200000 2097152 00:08:02.704 malloc 3145728 00:08:02.704 register 0x200000400000 4194304 00:08:02.704 buf 0x2000004fffc0 len 3145728 PASSED 00:08:02.704 malloc 64 00:08:02.704 buf 0x2000004ffec0 len 64 PASSED 00:08:02.704 malloc 4194304 00:08:02.704 register 0x200000800000 6291456 00:08:02.704 buf 0x2000009fffc0 len 4194304 PASSED 00:08:02.704 free 0x2000004fffc0 3145728 00:08:02.704 free 0x2000004ffec0 64 00:08:02.704 unregister 0x200000400000 4194304 PASSED 00:08:02.704 free 0x2000009fffc0 4194304 00:08:02.704 unregister 0x200000800000 6291456 PASSED 00:08:02.704 malloc 8388608 00:08:02.704 register 0x200000400000 10485760 00:08:02.704 buf 0x2000005fffc0 len 8388608 PASSED 00:08:02.704 free 0x2000005fffc0 8388608 00:08:02.704 unregister 0x200000400000 10485760 PASSED 00:08:02.704 passed 00:08:02.704 00:08:02.704 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.704 suites 1 1 n/a 0 0 00:08:02.704 tests 1 1 1 0 0 00:08:02.704 asserts 15 15 15 0 n/a 00:08:02.704 00:08:02.704 Elapsed time = 0.070 seconds 00:08:02.704 00:08:02.704 real 0m0.299s 00:08:02.704 user 0m0.100s 00:08:02.704 sys 0m0.097s 00:08:02.704 13:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.704 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 ************************************ 00:08:02.704 END TEST env_mem_callbacks 00:08:02.704 ************************************ 00:08:02.704 ************************************ 00:08:02.704 END TEST env 00:08:02.704 ************************************ 00:08:02.704 00:08:02.704 real 0m9.341s 00:08:02.704 user 0m7.882s 00:08:02.704 sys 0m1.128s 00:08:02.704 13:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.704 13:31:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 13:31:42 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:02.704 13:31:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.704 13:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.704 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 ************************************ 00:08:02.704 START TEST rpc 00:08:02.704 ************************************ 00:08:02.704 13:31:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:02.963 * Looking for test storage... 00:08:02.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:02.963 13:31:42 -- rpc/rpc.sh@65 -- # spdk_pid=104979 00:08:02.963 13:31:42 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:02.964 13:31:42 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.964 13:31:42 -- rpc/rpc.sh@67 -- # waitforlisten 104979 00:08:02.964 13:31:42 -- common/autotest_common.sh@819 -- # '[' -z 104979 ']' 00:08:02.964 13:31:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.964 13:31:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:02.964 13:31:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
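What the harness is doing at this point, condensed: it has launched spdk_tgt in the background and is polling the RPC socket until the target answers. A minimal sketch of that start-and-wait pattern, with the poll loop standing in for the real waitforlisten helper and rpc_get_methods used only as a cheap probe:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # Poll until the UNIX-domain RPC socket accepts requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
  # ... the rpc tests below run against $spdk_pid from here.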
00:08:02.964 13:31:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:02.964 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 [2024-07-10 13:31:42.225547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:02.964 [2024-07-10 13:31:42.225762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104979 ] 00:08:03.223 [2024-07-10 13:31:42.375086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.223 [2024-07-10 13:31:42.568623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.223 [2024-07-10 13:31:42.568900] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:03.223 [2024-07-10 13:31:42.568958] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104979' to capture a snapshot of events at runtime. 00:08:03.223 [2024-07-10 13:31:42.568998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104979 for offline analysis/debug. 00:08:03.223 [2024-07-10 13:31:42.569084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.599 13:31:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.599 13:31:43 -- common/autotest_common.sh@852 -- # return 0 00:08:04.599 13:31:43 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:04.599 13:31:43 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:04.599 13:31:43 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:04.599 13:31:43 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:04.599 13:31:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:04.599 13:31:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.599 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.599 ************************************ 00:08:04.599 START TEST rpc_integrity 00:08:04.599 ************************************ 00:08:04.599 13:31:43 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:04.599 13:31:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:04.599 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.599 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.599 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.599 13:31:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:04.599 13:31:43 -- rpc/rpc.sh@13 -- # jq length 00:08:04.599 13:31:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:04.599 13:31:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:04.599 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.599 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.599 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.599 13:31:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:04.599 13:31:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:04.599 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.599 13:31:43 -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.599 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.599 13:31:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:04.599 { 00:08:04.599 "name": "Malloc0", 00:08:04.599 "aliases": [ 00:08:04.599 "4c335e0f-7674-4e31-8bf9-8669e3d88cd5" 00:08:04.599 ], 00:08:04.599 "product_name": "Malloc disk", 00:08:04.599 "block_size": 512, 00:08:04.599 "num_blocks": 16384, 00:08:04.599 "uuid": "4c335e0f-7674-4e31-8bf9-8669e3d88cd5", 00:08:04.599 "assigned_rate_limits": { 00:08:04.599 "rw_ios_per_sec": 0, 00:08:04.599 "rw_mbytes_per_sec": 0, 00:08:04.599 "r_mbytes_per_sec": 0, 00:08:04.599 "w_mbytes_per_sec": 0 00:08:04.599 }, 00:08:04.599 "claimed": false, 00:08:04.599 "zoned": false, 00:08:04.599 "supported_io_types": { 00:08:04.599 "read": true, 00:08:04.599 "write": true, 00:08:04.599 "unmap": true, 00:08:04.599 "write_zeroes": true, 00:08:04.599 "flush": true, 00:08:04.599 "reset": true, 00:08:04.599 "compare": false, 00:08:04.599 "compare_and_write": false, 00:08:04.599 "abort": true, 00:08:04.599 "nvme_admin": false, 00:08:04.599 "nvme_io": false 00:08:04.599 }, 00:08:04.599 "memory_domains": [ 00:08:04.600 { 00:08:04.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.600 "dma_device_type": 2 00:08:04.600 } 00:08:04.600 ], 00:08:04.600 "driver_specific": {} 00:08:04.600 } 00:08:04.600 ]' 00:08:04.600 13:31:43 -- rpc/rpc.sh@17 -- # jq length 00:08:04.600 13:31:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:04.600 13:31:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:04.600 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.600 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.600 [2024-07-10 13:31:43.843076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:04.600 [2024-07-10 13:31:43.843173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.600 [2024-07-10 13:31:43.843219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:04.600 [2024-07-10 13:31:43.843252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.600 [2024-07-10 13:31:43.845239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.600 [2024-07-10 13:31:43.845335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:04.600 Passthru0 00:08:04.600 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.600 13:31:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:04.600 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.600 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.600 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.600 13:31:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:04.600 { 00:08:04.600 "name": "Malloc0", 00:08:04.600 "aliases": [ 00:08:04.600 "4c335e0f-7674-4e31-8bf9-8669e3d88cd5" 00:08:04.600 ], 00:08:04.600 "product_name": "Malloc disk", 00:08:04.600 "block_size": 512, 00:08:04.600 "num_blocks": 16384, 00:08:04.600 "uuid": "4c335e0f-7674-4e31-8bf9-8669e3d88cd5", 00:08:04.600 "assigned_rate_limits": { 00:08:04.600 "rw_ios_per_sec": 0, 00:08:04.600 "rw_mbytes_per_sec": 0, 00:08:04.600 "r_mbytes_per_sec": 0, 00:08:04.600 "w_mbytes_per_sec": 0 00:08:04.600 }, 00:08:04.600 "claimed": true, 00:08:04.600 "claim_type": "exclusive_write", 00:08:04.600 "zoned": false, 00:08:04.600 "supported_io_types": { 00:08:04.600 "read": true, 
00:08:04.600 "write": true, 00:08:04.600 "unmap": true, 00:08:04.600 "write_zeroes": true, 00:08:04.600 "flush": true, 00:08:04.600 "reset": true, 00:08:04.600 "compare": false, 00:08:04.600 "compare_and_write": false, 00:08:04.600 "abort": true, 00:08:04.600 "nvme_admin": false, 00:08:04.600 "nvme_io": false 00:08:04.600 }, 00:08:04.600 "memory_domains": [ 00:08:04.600 { 00:08:04.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.600 "dma_device_type": 2 00:08:04.600 } 00:08:04.600 ], 00:08:04.600 "driver_specific": {} 00:08:04.600 }, 00:08:04.600 { 00:08:04.600 "name": "Passthru0", 00:08:04.600 "aliases": [ 00:08:04.600 "b2bc5e7e-31c2-5ee1-ae75-a6d6fc814788" 00:08:04.600 ], 00:08:04.600 "product_name": "passthru", 00:08:04.600 "block_size": 512, 00:08:04.600 "num_blocks": 16384, 00:08:04.600 "uuid": "b2bc5e7e-31c2-5ee1-ae75-a6d6fc814788", 00:08:04.600 "assigned_rate_limits": { 00:08:04.600 "rw_ios_per_sec": 0, 00:08:04.600 "rw_mbytes_per_sec": 0, 00:08:04.600 "r_mbytes_per_sec": 0, 00:08:04.600 "w_mbytes_per_sec": 0 00:08:04.600 }, 00:08:04.600 "claimed": false, 00:08:04.600 "zoned": false, 00:08:04.600 "supported_io_types": { 00:08:04.600 "read": true, 00:08:04.600 "write": true, 00:08:04.600 "unmap": true, 00:08:04.600 "write_zeroes": true, 00:08:04.600 "flush": true, 00:08:04.600 "reset": true, 00:08:04.600 "compare": false, 00:08:04.600 "compare_and_write": false, 00:08:04.600 "abort": true, 00:08:04.600 "nvme_admin": false, 00:08:04.600 "nvme_io": false 00:08:04.600 }, 00:08:04.600 "memory_domains": [ 00:08:04.600 { 00:08:04.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.600 "dma_device_type": 2 00:08:04.600 } 00:08:04.600 ], 00:08:04.600 "driver_specific": { 00:08:04.600 "passthru": { 00:08:04.600 "name": "Passthru0", 00:08:04.600 "base_bdev_name": "Malloc0" 00:08:04.600 } 00:08:04.600 } 00:08:04.600 } 00:08:04.600 ]' 00:08:04.600 13:31:43 -- rpc/rpc.sh@21 -- # jq length 00:08:04.600 13:31:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:04.600 13:31:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:04.600 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.600 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.600 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.600 13:31:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:04.600 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.600 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.600 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.859 13:31:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:04.859 13:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.859 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.859 13:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.859 13:31:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:04.859 13:31:43 -- rpc/rpc.sh@26 -- # jq length 00:08:04.859 13:31:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:04.859 00:08:04.859 real 0m0.338s 00:08:04.859 ************************************ 00:08:04.859 END TEST rpc_integrity 00:08:04.859 ************************************ 00:08:04.859 user 0m0.196s 00:08:04.859 sys 0m0.036s 00:08:04.859 13:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.859 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.859 13:31:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:04.859 13:31:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:08:04.859 13:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.859 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.859 ************************************ 00:08:04.859 START TEST rpc_plugins 00:08:04.859 ************************************ 00:08:04.859 13:31:44 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:04.859 13:31:44 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:04.859 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.859 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.859 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.859 13:31:44 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:04.859 13:31:44 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:04.859 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.859 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.859 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.859 13:31:44 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:04.859 { 00:08:04.859 "name": "Malloc1", 00:08:04.859 "aliases": [ 00:08:04.859 "bb0037f4-5e79-4673-a874-a0a2efa7c6ec" 00:08:04.859 ], 00:08:04.859 "product_name": "Malloc disk", 00:08:04.859 "block_size": 4096, 00:08:04.859 "num_blocks": 256, 00:08:04.859 "uuid": "bb0037f4-5e79-4673-a874-a0a2efa7c6ec", 00:08:04.859 "assigned_rate_limits": { 00:08:04.859 "rw_ios_per_sec": 0, 00:08:04.860 "rw_mbytes_per_sec": 0, 00:08:04.860 "r_mbytes_per_sec": 0, 00:08:04.860 "w_mbytes_per_sec": 0 00:08:04.860 }, 00:08:04.860 "claimed": false, 00:08:04.860 "zoned": false, 00:08:04.860 "supported_io_types": { 00:08:04.860 "read": true, 00:08:04.860 "write": true, 00:08:04.860 "unmap": true, 00:08:04.860 "write_zeroes": true, 00:08:04.860 "flush": true, 00:08:04.860 "reset": true, 00:08:04.860 "compare": false, 00:08:04.860 "compare_and_write": false, 00:08:04.860 "abort": true, 00:08:04.860 "nvme_admin": false, 00:08:04.860 "nvme_io": false 00:08:04.860 }, 00:08:04.860 "memory_domains": [ 00:08:04.860 { 00:08:04.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.860 "dma_device_type": 2 00:08:04.860 } 00:08:04.860 ], 00:08:04.860 "driver_specific": {} 00:08:04.860 } 00:08:04.860 ]' 00:08:04.860 13:31:44 -- rpc/rpc.sh@32 -- # jq length 00:08:04.860 13:31:44 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:04.860 13:31:44 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:04.860 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.860 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.860 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.860 13:31:44 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:04.860 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.860 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:04.860 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.860 13:31:44 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:04.860 13:31:44 -- rpc/rpc.sh@36 -- # jq length 00:08:05.118 13:31:44 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:05.118 00:08:05.118 real 0m0.160s 00:08:05.118 user 0m0.098s 00:08:05.119 sys 0m0.015s 00:08:05.119 13:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.119 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.119 ************************************ 00:08:05.119 END TEST rpc_plugins 00:08:05.119 ************************************ 00:08:05.119 13:31:44 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:08:05.119 13:31:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:05.119 13:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.119 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.119 ************************************ 00:08:05.119 START TEST rpc_trace_cmd_test 00:08:05.119 ************************************ 00:08:05.119 13:31:44 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:05.119 13:31:44 -- rpc/rpc.sh@40 -- # local info 00:08:05.119 13:31:44 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:05.119 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.119 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.119 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.119 13:31:44 -- rpc/rpc.sh@42 -- # info='{ 00:08:05.119 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104979", 00:08:05.119 "tpoint_group_mask": "0x8", 00:08:05.119 "iscsi_conn": { 00:08:05.119 "mask": "0x2", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "scsi": { 00:08:05.119 "mask": "0x4", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "bdev": { 00:08:05.119 "mask": "0x8", 00:08:05.119 "tpoint_mask": "0xffffffffffffffff" 00:08:05.119 }, 00:08:05.119 "nvmf_rdma": { 00:08:05.119 "mask": "0x10", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "nvmf_tcp": { 00:08:05.119 "mask": "0x20", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "ftl": { 00:08:05.119 "mask": "0x40", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "blobfs": { 00:08:05.119 "mask": "0x80", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "dsa": { 00:08:05.119 "mask": "0x200", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "thread": { 00:08:05.119 "mask": "0x400", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "nvme_pcie": { 00:08:05.119 "mask": "0x800", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "iaa": { 00:08:05.119 "mask": "0x1000", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "nvme_tcp": { 00:08:05.119 "mask": "0x2000", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 }, 00:08:05.119 "bdev_nvme": { 00:08:05.119 "mask": "0x4000", 00:08:05.119 "tpoint_mask": "0x0" 00:08:05.119 } 00:08:05.119 }' 00:08:05.119 13:31:44 -- rpc/rpc.sh@43 -- # jq length 00:08:05.119 13:31:44 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:05.119 13:31:44 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:05.119 13:31:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:05.119 13:31:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:05.119 13:31:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:05.119 13:31:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:05.408 13:31:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:05.408 13:31:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:05.408 13:31:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:05.408 00:08:05.408 real 0m0.275s 00:08:05.408 user 0m0.239s 00:08:05.408 sys 0m0.028s 00:08:05.408 ************************************ 00:08:05.408 END TEST rpc_trace_cmd_test 00:08:05.408 ************************************ 00:08:05.408 13:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.408 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.408 13:31:44 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:05.408 13:31:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:05.408 13:31:44 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:08:05.408 13:31:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:05.408 13:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.408 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.408 ************************************ 00:08:05.408 START TEST rpc_daemon_integrity 00:08:05.408 ************************************ 00:08:05.408 13:31:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:05.408 13:31:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:05.408 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.408 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.408 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.408 13:31:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:05.408 13:31:44 -- rpc/rpc.sh@13 -- # jq length 00:08:05.408 13:31:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:05.408 13:31:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:05.408 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.408 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.408 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.408 13:31:44 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:05.408 13:31:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:05.408 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.408 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.408 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.408 13:31:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:05.408 { 00:08:05.408 "name": "Malloc2", 00:08:05.408 "aliases": [ 00:08:05.408 "52990117-f5ff-4ad4-8372-320171f78474" 00:08:05.408 ], 00:08:05.408 "product_name": "Malloc disk", 00:08:05.408 "block_size": 512, 00:08:05.408 "num_blocks": 16384, 00:08:05.408 "uuid": "52990117-f5ff-4ad4-8372-320171f78474", 00:08:05.408 "assigned_rate_limits": { 00:08:05.408 "rw_ios_per_sec": 0, 00:08:05.408 "rw_mbytes_per_sec": 0, 00:08:05.408 "r_mbytes_per_sec": 0, 00:08:05.408 "w_mbytes_per_sec": 0 00:08:05.408 }, 00:08:05.408 "claimed": false, 00:08:05.408 "zoned": false, 00:08:05.408 "supported_io_types": { 00:08:05.408 "read": true, 00:08:05.408 "write": true, 00:08:05.408 "unmap": true, 00:08:05.408 "write_zeroes": true, 00:08:05.408 "flush": true, 00:08:05.408 "reset": true, 00:08:05.408 "compare": false, 00:08:05.408 "compare_and_write": false, 00:08:05.408 "abort": true, 00:08:05.408 "nvme_admin": false, 00:08:05.408 "nvme_io": false 00:08:05.408 }, 00:08:05.408 "memory_domains": [ 00:08:05.409 { 00:08:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.409 "dma_device_type": 2 00:08:05.409 } 00:08:05.409 ], 00:08:05.409 "driver_specific": {} 00:08:05.409 } 00:08:05.409 ]' 00:08:05.409 13:31:44 -- rpc/rpc.sh@17 -- # jq length 00:08:05.668 13:31:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:05.668 13:31:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:05.668 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 [2024-07-10 13:31:44.794546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:05.668 [2024-07-10 13:31:44.794647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.668 [2024-07-10 13:31:44.794693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.668 
[2024-07-10 13:31:44.794735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.668 [2024-07-10 13:31:44.796814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.668 [2024-07-10 13:31:44.796907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:05.668 Passthru0 00:08:05.668 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.668 13:31:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:05.668 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.668 13:31:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:05.668 { 00:08:05.668 "name": "Malloc2", 00:08:05.668 "aliases": [ 00:08:05.668 "52990117-f5ff-4ad4-8372-320171f78474" 00:08:05.668 ], 00:08:05.668 "product_name": "Malloc disk", 00:08:05.668 "block_size": 512, 00:08:05.668 "num_blocks": 16384, 00:08:05.668 "uuid": "52990117-f5ff-4ad4-8372-320171f78474", 00:08:05.668 "assigned_rate_limits": { 00:08:05.668 "rw_ios_per_sec": 0, 00:08:05.668 "rw_mbytes_per_sec": 0, 00:08:05.668 "r_mbytes_per_sec": 0, 00:08:05.668 "w_mbytes_per_sec": 0 00:08:05.668 }, 00:08:05.668 "claimed": true, 00:08:05.668 "claim_type": "exclusive_write", 00:08:05.668 "zoned": false, 00:08:05.668 "supported_io_types": { 00:08:05.668 "read": true, 00:08:05.668 "write": true, 00:08:05.668 "unmap": true, 00:08:05.668 "write_zeroes": true, 00:08:05.668 "flush": true, 00:08:05.668 "reset": true, 00:08:05.668 "compare": false, 00:08:05.668 "compare_and_write": false, 00:08:05.668 "abort": true, 00:08:05.668 "nvme_admin": false, 00:08:05.668 "nvme_io": false 00:08:05.668 }, 00:08:05.668 "memory_domains": [ 00:08:05.668 { 00:08:05.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.668 "dma_device_type": 2 00:08:05.668 } 00:08:05.668 ], 00:08:05.668 "driver_specific": {} 00:08:05.668 }, 00:08:05.668 { 00:08:05.668 "name": "Passthru0", 00:08:05.668 "aliases": [ 00:08:05.668 "73c3ece6-eeef-5491-a7d2-8c35609cbf42" 00:08:05.668 ], 00:08:05.668 "product_name": "passthru", 00:08:05.668 "block_size": 512, 00:08:05.668 "num_blocks": 16384, 00:08:05.668 "uuid": "73c3ece6-eeef-5491-a7d2-8c35609cbf42", 00:08:05.668 "assigned_rate_limits": { 00:08:05.668 "rw_ios_per_sec": 0, 00:08:05.668 "rw_mbytes_per_sec": 0, 00:08:05.668 "r_mbytes_per_sec": 0, 00:08:05.668 "w_mbytes_per_sec": 0 00:08:05.668 }, 00:08:05.668 "claimed": false, 00:08:05.668 "zoned": false, 00:08:05.668 "supported_io_types": { 00:08:05.668 "read": true, 00:08:05.668 "write": true, 00:08:05.668 "unmap": true, 00:08:05.668 "write_zeroes": true, 00:08:05.668 "flush": true, 00:08:05.668 "reset": true, 00:08:05.668 "compare": false, 00:08:05.668 "compare_and_write": false, 00:08:05.668 "abort": true, 00:08:05.668 "nvme_admin": false, 00:08:05.668 "nvme_io": false 00:08:05.668 }, 00:08:05.668 "memory_domains": [ 00:08:05.668 { 00:08:05.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.668 "dma_device_type": 2 00:08:05.668 } 00:08:05.668 ], 00:08:05.668 "driver_specific": { 00:08:05.668 "passthru": { 00:08:05.668 "name": "Passthru0", 00:08:05.668 "base_bdev_name": "Malloc2" 00:08:05.668 } 00:08:05.668 } 00:08:05.668 } 00:08:05.668 ]' 00:08:05.668 13:31:44 -- rpc/rpc.sh@21 -- # jq length 00:08:05.668 13:31:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:05.668 13:31:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:05.668 13:31:44 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.668 13:31:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:05.668 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.668 13:31:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:05.668 13:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 13:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.668 13:31:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:05.668 13:31:44 -- rpc/rpc.sh@26 -- # jq length 00:08:05.668 13:31:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:05.668 00:08:05.668 real 0m0.351s 00:08:05.668 user 0m0.203s 00:08:05.668 sys 0m0.047s 00:08:05.668 13:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.668 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.668 ************************************ 00:08:05.668 END TEST rpc_daemon_integrity 00:08:05.668 ************************************ 00:08:05.668 13:31:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:05.668 13:31:45 -- rpc/rpc.sh@84 -- # killprocess 104979 00:08:05.668 13:31:45 -- common/autotest_common.sh@926 -- # '[' -z 104979 ']' 00:08:05.668 13:31:45 -- common/autotest_common.sh@930 -- # kill -0 104979 00:08:05.668 13:31:45 -- common/autotest_common.sh@931 -- # uname 00:08:05.927 13:31:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:05.927 13:31:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104979 00:08:05.927 13:31:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:05.927 13:31:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:05.927 13:31:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104979' 00:08:05.927 killing process with pid 104979 00:08:05.927 13:31:45 -- common/autotest_common.sh@945 -- # kill 104979 00:08:05.927 13:31:45 -- common/autotest_common.sh@950 -- # wait 104979 00:08:08.462 ************************************ 00:08:08.462 END TEST rpc 00:08:08.462 ************************************ 00:08:08.462 00:08:08.462 real 0m5.226s 00:08:08.462 user 0m6.043s 00:08:08.462 sys 0m0.737s 00:08:08.462 13:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 13:31:47 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:08.462 13:31:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.462 13:31:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 ************************************ 00:08:08.462 START TEST rpc_client 00:08:08.462 ************************************ 00:08:08.462 13:31:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:08.462 * Looking for test storage... 
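An aside on the teardown traced above: killprocess checks that the pid is still alive and is not a sudo wrapper before signalling it, then reaps it. Stripped to its core (the sudo branch of the real helper is more involved than this bail-out):

  pid=104979
  kill -0 "$pid"                                      # still running?
  [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && exit 1
  kill "$pid" && wait "$pid"                          # SIGTERM, then reap; wait works
                                                      # because spdk_tgt is a child here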
00:08:08.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:08.462 13:31:47 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:08.462 OK 00:08:08.462 13:31:47 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:08.462 ************************************ 00:08:08.462 END TEST rpc_client 00:08:08.462 ************************************ 00:08:08.462 00:08:08.462 real 0m0.199s 00:08:08.462 user 0m0.112s 00:08:08.462 sys 0m0.105s 00:08:08.462 13:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 13:31:47 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:08.462 13:31:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.462 13:31:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 ************************************ 00:08:08.462 START TEST json_config 00:08:08.462 ************************************ 00:08:08.462 13:31:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:08.462 13:31:47 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.462 13:31:47 -- nvmf/common.sh@7 -- # uname -s 00:08:08.462 13:31:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.462 13:31:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.462 13:31:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.462 13:31:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.462 13:31:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.462 13:31:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.462 13:31:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.462 13:31:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.462 13:31:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.462 13:31:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.462 13:31:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d6feb782-7fab-4e2d-bb2c-a6a28bca2f73 00:08:08.462 13:31:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=d6feb782-7fab-4e2d-bb2c-a6a28bca2f73 00:08:08.462 13:31:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.462 13:31:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.462 13:31:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:08.462 13:31:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.462 13:31:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.462 13:31:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.462 13:31:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.462 13:31:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:08.462 13:31:47 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:08.462 13:31:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:08.462 13:31:47 -- paths/export.sh@5 -- # export PATH 00:08:08.462 13:31:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:08.462 13:31:47 -- nvmf/common.sh@46 -- # : 0 00:08:08.462 13:31:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:08.462 13:31:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:08.462 13:31:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:08.462 13:31:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.462 13:31:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.462 13:31:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:08.462 13:31:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:08.462 13:31:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:08.462 13:31:47 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:08.462 13:31:47 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:08.462 13:31:47 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:08.462 13:31:47 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:08.462 13:31:47 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:08:08.462 13:31:47 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:08.462 13:31:47 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:08:08.462 13:31:47 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:08.462 13:31:47 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:08:08.462 13:31:47 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:08.462 13:31:47 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:08:08.462 13:31:47 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:08.462 13:31:47 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:08.462 13:31:47 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:08.462 13:31:47 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:08.462 INFO: JSON configuration test 
init 00:08:08.462 13:31:47 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:08.462 13:31:47 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:08.462 13:31:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 13:31:47 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:08.462 13:31:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:08.462 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.462 13:31:47 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:08.463 13:31:47 -- json_config/json_config.sh@98 -- # local app=target 00:08:08.463 13:31:47 -- json_config/json_config.sh@99 -- # shift 00:08:08.463 13:31:47 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:08.463 13:31:47 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:08.463 13:31:47 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:08.463 13:31:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:08.463 13:31:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:08.463 13:31:47 -- json_config/json_config.sh@111 -- # app_pid[$app]=105271 00:08:08.463 13:31:47 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:08.463 Waiting for target to run... 00:08:08.463 13:31:47 -- json_config/json_config.sh@114 -- # waitforlisten 105271 /var/tmp/spdk_tgt.sock 00:08:08.463 13:31:47 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:08.463 13:31:47 -- common/autotest_common.sh@819 -- # '[' -z 105271 ']' 00:08:08.463 13:31:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:08.463 13:31:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:08.463 13:31:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:08.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:08.463 13:31:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:08.463 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:08:08.463 [2024-07-10 13:31:47.771800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
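The target launch for json_config, traced above, differs from the plain rpc run: it reserves 1024 MB of hugepage memory with -s 1024 and, crucially, passes --wait-for-rpc, which holds SPDK subsystem initialization until framework_start_init arrives over the socket, so the test can inject configuration first (behavior as documented for SPDK apps; the explicit RPC shown last is normally folded into the load_config call traced below):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init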
00:08:08.463 [2024-07-10 13:31:47.772028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105271 ] 00:08:09.032 [2024-07-10 13:31:48.161519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.032 [2024-07-10 13:31:48.334087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.032 [2024-07-10 13:31:48.334342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.292 00:08:09.292 13:31:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:09.292 13:31:48 -- common/autotest_common.sh@852 -- # return 0 00:08:09.292 13:31:48 -- json_config/json_config.sh@115 -- # echo '' 00:08:09.292 13:31:48 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:09.292 13:31:48 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:09.292 13:31:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:09.292 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.292 13:31:48 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:09.292 13:31:48 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:09.292 13:31:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.292 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.292 13:31:48 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:09.292 13:31:48 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:09.292 13:31:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:10.231 13:31:49 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:10.231 13:31:49 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:10.231 13:31:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.231 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.231 13:31:49 -- json_config/json_config.sh@48 -- # local ret=0 00:08:10.231 13:31:49 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:08:10.231 13:31:49 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:10.231 13:31:49 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:08:10.231 13:31:49 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:10.231 13:31:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:10.231 13:31:49 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:10.490 13:31:49 -- json_config/json_config.sh@51 -- # local get_types 00:08:10.490 13:31:49 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:10.490 13:31:49 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:10.490 13:31:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:10.490 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.490 13:31:49 -- json_config/json_config.sh@58 -- # return 0 00:08:10.490 13:31:49 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:10.490 13:31:49 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:10.490 13:31:49 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:10.490 13:31:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.490 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.490 13:31:49 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:10.490 13:31:49 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:10.490 13:31:49 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:10.490 13:31:49 -- json_config/json_config.sh@164 -- # get_notifications 00:08:10.490 13:31:49 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:10.490 13:31:49 -- json_config/json_config.sh@64 -- # IFS=: 00:08:10.490 13:31:49 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:10.490 13:31:49 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:10.490 13:31:49 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:10.490 13:31:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:10.749 13:31:49 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:10.749 13:31:49 -- json_config/json_config.sh@64 -- # IFS=: 00:08:10.749 13:31:49 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:10.749 13:31:49 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:10.749 13:31:49 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:10.749 13:31:49 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:10.749 13:31:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:10.749 Nvme0n1p0 Nvme0n1p1 00:08:10.749 13:31:50 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:10.749 13:31:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:11.008 [2024-07-10 13:31:50.299874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:11.008 [2024-07-10 13:31:50.300073] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:11.008 00:08:11.008 13:31:50 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:11.008 13:31:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:11.267 Malloc3 00:08:11.267 13:31:50 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:11.267 13:31:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:11.526 [2024-07-10 13:31:50.668166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:11.526 [2024-07-10 13:31:50.668314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.526 [2024-07-10 13:31:50.668374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:11.526 [2024-07-10 13:31:50.668414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:11.526 [2024-07-10 13:31:50.670364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.526 [2024-07-10 13:31:50.670453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:11.526 PTBdevFromMalloc3 00:08:11.526 13:31:50 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:11.526 13:31:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:11.526 Null0 00:08:11.526 13:31:50 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:11.526 13:31:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:11.785 Malloc0 00:08:11.785 13:31:51 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:11.785 13:31:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:12.045 Malloc1 00:08:12.045 13:31:51 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:12.045 13:31:51 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:12.304 102400+0 records in 00:08:12.304 102400+0 records out 00:08:12.304 104857600 bytes (105 MB, 100 MiB) copied, 0.183224 s, 572 MB/s 00:08:12.304 13:31:51 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:12.304 13:31:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:12.304 aio_disk 00:08:12.304 13:31:51 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:12.304 13:31:51 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:12.304 13:31:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:12.564 ef03cf14-cff6-4f18-a842-43b52892a034 00:08:12.564 13:31:51 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:12.564 13:31:51 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:12.564 13:31:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:12.824 13:31:51 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:12.825 13:31:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:12.825 13:31:52 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:12.825 13:31:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:13.086 13:31:52 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:13.086 13:31:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:13.345 13:31:52 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:13.345 13:31:52 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:13.345 13:31:52 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 00:08:13.345 13:31:52 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:13.345 13:31:52 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:13.346 13:31:52 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:13.346 13:31:52 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 00:08:13.346 13:31:52 -- json_config/json_config.sh@74 -- # sort 00:08:13.346 13:31:52 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:13.346 13:31:52 -- json_config/json_config.sh@75 -- # get_notifications 00:08:13.346 13:31:52 -- json_config/json_config.sh@75 -- # sort 00:08:13.346 13:31:52 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:13.346 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.346 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.346 13:31:52 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:13.346 13:31:52 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:13.346 13:31:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@65 -- # echo bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # IFS=: 00:08:13.607 13:31:52 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:13.607 13:31:52 -- json_config/json_config.sh@77 
-- # [[ bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\0\d\9\5\6\8\7\-\d\7\7\0\-\4\d\1\c\-\a\7\c\d\-\5\6\f\4\e\9\d\6\0\2\0\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\9\f\f\f\d\9\0\-\5\6\c\b\-\4\a\e\e\-\9\e\5\a\-\6\c\a\e\e\d\1\8\1\a\c\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\2\1\9\2\7\b\9\-\b\5\0\2\-\4\c\5\a\-\b\5\e\6\-\0\c\d\e\8\8\3\7\d\3\7\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\b\8\b\9\7\d\2\-\f\e\3\1\-\4\7\f\4\-\b\3\a\6\-\f\a\9\9\c\5\e\c\a\e\5\c ]] 00:08:13.607 13:31:52 -- json_config/json_config.sh@89 -- # cat 00:08:13.607 13:31:52 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c 00:08:13.607 Expected events matched: 00:08:13.607 bdev_register:00d95687-d770-4d1c-a7cd-56f4e9d60204 00:08:13.607 bdev_register:19fffd90-56cb-4aee-9e5a-6caeed181ac2 00:08:13.607 bdev_register:Malloc0 00:08:13.607 bdev_register:Malloc0p0 00:08:13.607 bdev_register:Malloc0p1 00:08:13.607 bdev_register:Malloc0p2 00:08:13.607 bdev_register:Malloc1 00:08:13.607 bdev_register:Malloc3 00:08:13.607 bdev_register:Null0 00:08:13.607 bdev_register:Nvme0n1 00:08:13.607 bdev_register:Nvme0n1p0 00:08:13.607 bdev_register:Nvme0n1p1 00:08:13.607 bdev_register:PTBdevFromMalloc3 00:08:13.607 bdev_register:aio_disk 00:08:13.607 bdev_register:c21927b9-b502-4c5a-b5e6-0cde8837d375 00:08:13.607 bdev_register:fb8b97d2-fe31-47f4-b3a6-fa99c5ecae5c 00:08:13.607 13:31:52 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:13.607 13:31:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.607 13:31:52 -- common/autotest_common.sh@10 -- # set +x 00:08:13.607 13:31:52 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:13.607 13:31:52 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:13.607 13:31:52 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:13.607 13:31:52 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:13.607 13:31:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.607 13:31:52 -- common/autotest_common.sh@10 -- # set +x 00:08:13.607 
13:31:52 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:13.607 13:31:52 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:13.607 13:31:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:13.865 MallocBdevForConfigChangeCheck 00:08:13.865 13:31:53 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:13.865 13:31:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.865 13:31:53 -- common/autotest_common.sh@10 -- # set +x 00:08:13.865 13:31:53 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:13.865 13:31:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:14.124 INFO: shutting down applications... 00:08:14.124 13:31:53 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:14.124 13:31:53 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:14.124 13:31:53 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:14.124 13:31:53 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:14.124 13:31:53 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:14.383 [2024-07-10 13:31:53.524709] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:14.383 Calling clear_vhost_scsi_subsystem 00:08:14.383 Calling clear_iscsi_subsystem 00:08:14.383 Calling clear_vhost_blk_subsystem 00:08:14.383 Calling clear_nbd_subsystem 00:08:14.383 Calling clear_nvmf_subsystem 00:08:14.383 Calling clear_bdev_subsystem 00:08:14.383 Calling clear_accel_subsystem 00:08:14.383 Calling clear_iobuf_subsystem 00:08:14.383 Calling clear_sock_subsystem 00:08:14.383 Calling clear_vmd_subsystem 00:08:14.383 Calling clear_scheduler_subsystem 00:08:14.383 13:31:53 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:14.383 13:31:53 -- json_config/json_config.sh@396 -- # count=100 00:08:14.383 13:31:53 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:14.383 13:31:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:14.383 13:31:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:14.383 13:31:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:14.952 13:31:54 -- json_config/json_config.sh@398 -- # break 00:08:14.952 13:31:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:14.952 13:31:54 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:14.952 13:31:54 -- json_config/json_config.sh@120 -- # local app=target 00:08:14.952 13:31:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:14.952 13:31:54 -- json_config/json_config.sh@124 -- # [[ -n 105271 ]] 00:08:14.952 13:31:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 105271 00:08:14.952 13:31:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:14.952 13:31:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:14.952 13:31:54 -- 
json_config/json_config.sh@130 -- # kill -0 105271 00:08:14.952 13:31:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:15.212 13:31:54 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:15.212 13:31:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:15.212 13:31:54 -- json_config/json_config.sh@130 -- # kill -0 105271 00:08:15.212 13:31:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:15.781 13:31:55 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:15.781 13:31:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:15.781 13:31:55 -- json_config/json_config.sh@130 -- # kill -0 105271 00:08:15.781 13:31:55 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:15.781 13:31:55 -- json_config/json_config.sh@132 -- # break 00:08:15.781 13:31:55 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:15.781 13:31:55 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:15.781 SPDK target shutdown done 00:08:15.781 13:31:55 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:15.781 INFO: relaunching applications... 00:08:15.781 13:31:55 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:15.781 13:31:55 -- json_config/json_config.sh@98 -- # local app=target 00:08:15.781 13:31:55 -- json_config/json_config.sh@99 -- # shift 00:08:15.781 13:31:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:15.781 13:31:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:15.781 13:31:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:15.782 13:31:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:15.782 13:31:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:15.782 13:31:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=105542 00:08:15.782 13:31:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:15.782 Waiting for target to run... 00:08:15.782 13:31:55 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:15.782 13:31:55 -- json_config/json_config.sh@114 -- # waitforlisten 105542 /var/tmp/spdk_tgt.sock 00:08:15.782 13:31:55 -- common/autotest_common.sh@819 -- # '[' -z 105542 ']' 00:08:15.782 13:31:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:15.782 13:31:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:15.782 13:31:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:15.782 13:31:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:15.782 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:08:15.782 [2024-07-10 13:31:55.106642] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
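The shutdown above is a plain polling loop: one SIGINT, then kill -0 probes of the pid every half second, giving the target roughly 15 seconds to exit before the harness gives up. A minimal standalone equivalent (the function name is illustrative; the real logic lives inline in json_config.sh):

    wait_for_exit() {
        local pid=$1
        kill -SIGINT "$pid"            # ask spdk_tgt to shut down cleanly
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only tests whether the pid still exists
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        return 1                       # still alive after ~15s
    }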
00:08:15.782 [2024-07-10 13:31:55.106866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105542 ] 00:08:16.373 [2024-07-10 13:31:55.494857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.373 [2024-07-10 13:31:55.670329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.373 [2024-07-10 13:31:55.670626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.334 [2024-07-10 13:31:56.356379] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:17.334 [2024-07-10 13:31:56.356547] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:17.334 [2024-07-10 13:31:56.364338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:17.334 [2024-07-10 13:31:56.364411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:17.334 [2024-07-10 13:31:56.372349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:17.334 [2024-07-10 13:31:56.372423] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:17.334 [2024-07-10 13:31:56.372459] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:17.334 [2024-07-10 13:31:56.464150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:17.334 [2024-07-10 13:31:56.464285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.334 [2024-07-10 13:31:56.464331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:17.334 [2024-07-10 13:31:56.464368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.334 [2024-07-10 13:31:56.464816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.334 [2024-07-10 13:31:56.464887] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:18.270 13:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.270 13:31:57 -- common/autotest_common.sh@852 -- # return 0 00:08:18.270 13:31:57 -- json_config/json_config.sh@115 -- # echo '' 00:08:18.270 00:08:18.270 13:31:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:18.270 13:31:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:18.270 INFO: Checking if target configuration is the same... 00:08:18.271 13:31:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:18.271 13:31:57 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:18.271 13:31:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:18.271 + '[' 2 -ne 2 ']' 00:08:18.271 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:18.271 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
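The relaunch that follows hinges on the JSON config round-trip: the live target's state is captured with save_config and handed back via --json on the next start, so the restarted target should reconstruct the same bdevs. In outline (flags and paths as in the trace; error handling omitted):

    # capture the running configuration over the RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # restart the target from that snapshot
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json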
00:08:18.271 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:18.271 +++ basename /dev/fd/62 00:08:18.271 ++ mktemp /tmp/62.XXX 00:08:18.271 + tmp_file_1=/tmp/62.yNF 00:08:18.271 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:18.271 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:18.271 + tmp_file_2=/tmp/spdk_tgt_config.json.3Xf 00:08:18.271 + ret=0 00:08:18.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:18.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:18.271 + diff -u /tmp/62.yNF /tmp/spdk_tgt_config.json.3Xf 00:08:18.271 + echo 'INFO: JSON config files are the same' 00:08:18.271 INFO: JSON config files are the same 00:08:18.271 + rm /tmp/62.yNF /tmp/spdk_tgt_config.json.3Xf 00:08:18.271 + exit 0 00:08:18.529 13:31:57 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:18.529 13:31:57 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:18.529 INFO: changing configuration and checking if this can be detected... 00:08:18.529 13:31:57 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:18.529 13:31:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:18.529 13:31:57 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:18.529 13:31:57 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:18.529 13:31:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:18.529 + '[' 2 -ne 2 ']' 00:08:18.529 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:18.529 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:18.529 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:18.529 +++ basename /dev/fd/62 00:08:18.529 ++ mktemp /tmp/62.XXX 00:08:18.529 + tmp_file_1=/tmp/62.hdb 00:08:18.529 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:18.529 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:18.529 + tmp_file_2=/tmp/spdk_tgt_config.json.LRc 00:08:18.529 + ret=0 00:08:18.529 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:19.097 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:19.097 + diff -u /tmp/62.hdb /tmp/spdk_tgt_config.json.LRc 00:08:19.097 + ret=1 00:08:19.097 + echo '=== Start of file: /tmp/62.hdb ===' 00:08:19.097 + cat /tmp/62.hdb 00:08:19.097 + echo '=== End of file: /tmp/62.hdb ===' 00:08:19.097 + echo '' 00:08:19.097 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LRc ===' 00:08:19.097 + cat /tmp/spdk_tgt_config.json.LRc 00:08:19.097 + echo '=== End of file: /tmp/spdk_tgt_config.json.LRc ===' 00:08:19.097 + echo '' 00:08:19.097 + rm /tmp/62.hdb /tmp/spdk_tgt_config.json.LRc 00:08:19.097 + exit 1 00:08:19.097 13:31:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:08:19.097 INFO: configuration change detected. 
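Both comparisons use the same recipe: dump each configuration, normalize it with config_filter.py -method sort so that key ordering cannot produce false positives, then let diff -u decide. A condensed sketch (temp-file names follow the mktemp pattern in the trace; run from the spdk repo root):

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$tmp1"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # non-zero diff -> ret=1
    fi

Deleting MallocBdevForConfigChangeCheck before the second run is what flips the diff from clean to ret=1.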
00:08:19.097 13:31:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:19.097 13:31:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:19.097 13:31:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:19.097 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 13:31:58 -- json_config/json_config.sh@360 -- # local ret=0 00:08:19.097 13:31:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:19.097 13:31:58 -- json_config/json_config.sh@370 -- # [[ -n 105542 ]] 00:08:19.097 13:31:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:19.097 13:31:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:19.097 13:31:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:19.097 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 13:31:58 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:19.097 13:31:58 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:19.097 13:31:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:19.097 13:31:58 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:19.097 13:31:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:19.356 13:31:58 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:19.356 13:31:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:19.615 13:31:58 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:19.615 13:31:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:19.615 13:31:58 -- json_config/json_config.sh@246 -- # uname -s 00:08:19.615 13:31:58 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:19.615 13:31:58 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:19.615 13:31:58 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:19.615 13:31:58 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:19.615 13:31:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:19.615 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:08:19.873 13:31:58 -- json_config/json_config.sh@376 -- # killprocess 105542 00:08:19.873 13:31:58 -- common/autotest_common.sh@926 -- # '[' -z 105542 ']' 00:08:19.873 13:31:58 -- common/autotest_common.sh@930 -- # kill -0 105542 00:08:19.873 13:31:58 -- common/autotest_common.sh@931 -- # uname 00:08:19.873 13:31:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.873 13:31:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105542 00:08:19.873 13:31:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.873 13:31:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.873 13:31:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105542' 00:08:19.873 killing process with pid 105542 00:08:19.873 13:31:59 -- common/autotest_common.sh@945 -- # kill 105542 00:08:19.873 13:31:59 -- common/autotest_common.sh@950 -- # wait 105542 00:08:20.810 13:31:59 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:20.810 13:31:59 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:20.810 13:31:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:20.810 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.810 INFO: Success 00:08:20.810 13:32:00 -- json_config/json_config.sh@381 -- # return 0 00:08:20.810 13:32:00 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:20.810 ************************************ 00:08:20.810 END TEST json_config 00:08:20.810 ************************************ 00:08:20.810 00:08:20.810 real 0m12.422s 00:08:20.810 user 0m17.038s 00:08:20.810 sys 0m2.256s 00:08:20.810 13:32:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.810 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:08:20.810 13:32:00 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:20.810 13:32:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.810 13:32:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.810 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:08:20.810 ************************************ 00:08:20.811 START TEST json_config_extra_key 00:08:20.811 ************************************ 00:08:20.811 13:32:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:20.811 13:32:00 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.811 13:32:00 -- nvmf/common.sh@7 -- # uname -s 00:08:20.811 13:32:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.811 13:32:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.811 13:32:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.811 13:32:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.811 13:32:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.811 13:32:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.811 13:32:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.811 13:32:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.811 13:32:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.811 13:32:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.811 13:32:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e72533dc-efc1-48fb-9b7b-b9326d8e470c 00:08:20.811 13:32:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=e72533dc-efc1-48fb-9b7b-b9326d8e470c 00:08:20.811 13:32:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.811 13:32:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.811 13:32:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:20.811 13:32:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.811 13:32:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.811 13:32:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.811 13:32:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.811 13:32:00 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.811 13:32:00 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.811 13:32:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.811 13:32:00 -- paths/export.sh@5 -- # export PATH 00:08:20.811 13:32:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.811 13:32:00 -- nvmf/common.sh@46 -- # : 0 00:08:20.811 13:32:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.811 13:32:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.811 13:32:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.811 13:32:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.811 13:32:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.811 13:32:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.811 13:32:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.811 13:32:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:21.070 INFO: launching applications... 
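Each test keeps per-app bookkeeping in parallel associative arrays keyed by app name ('target' here), so later steps can kill or probe an app by name rather than by a hard-coded pid. Condensed from the declarations above (the launch line is simplified):

    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!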
00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=105728 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:21.070 Waiting for target to run... 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 105728 /var/tmp/spdk_tgt.sock 00:08:21.070 13:32:00 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:21.070 13:32:00 -- common/autotest_common.sh@819 -- # '[' -z 105728 ']' 00:08:21.070 13:32:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:21.070 13:32:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.070 13:32:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:21.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:21.070 13:32:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.070 13:32:00 -- common/autotest_common.sh@10 -- # set +x 00:08:21.070 [2024-07-10 13:32:00.243800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:21.070 [2024-07-10 13:32:00.244013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105728 ] 00:08:21.330 [2024-07-10 13:32:00.618807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.590 [2024-07-10 13:32:00.797625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.590 [2024-07-10 13:32:00.797899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.529 00:08:22.529 INFO: shutting down applications... 00:08:22.529 13:32:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:22.529 13:32:01 -- common/autotest_common.sh@852 -- # return 0 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
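The launch line is worth unpacking: -m 0x1 is a core bitmask pinning the app to core 0 (hence 'Total cores available: 1'), -s 1024 caps hugepage memory at 1024 MB (it resurfaces as '-m 1024' in the DPDK EAL parameters), and -r names the UNIX-domain RPC socket that the wait loops poll. For example:

    # core mask 0x1 -> core 0 only; -s in MB of hugepage memory; -r sets the RPC socket
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json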
00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 105728 ]] 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 105728 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:22.529 13:32:01 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:23.121 13:32:02 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:23.121 13:32:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:23.121 13:32:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:23.121 13:32:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:23.691 13:32:02 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:23.691 13:32:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:23.691 13:32:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:23.691 13:32:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:23.950 13:32:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:23.950 13:32:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:23.950 13:32:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:23.950 13:32:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:24.516 13:32:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:24.516 13:32:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:24.516 13:32:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:24.516 13:32:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:25.084 13:32:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:25.084 13:32:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:25.084 13:32:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:25.084 13:32:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:25.652 SPDK target shutdown done 00:08:25.652 Success 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105728 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:25.652 13:32:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:25.652 ************************************ 00:08:25.652 END TEST json_config_extra_key 00:08:25.652 ************************************ 00:08:25.652 00:08:25.652 real 0m4.686s 00:08:25.652 user 0m4.150s 00:08:25.652 sys 0m0.503s 00:08:25.652 13:32:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.652 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.652 
13:32:04 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:25.652 13:32:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.652 13:32:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.652 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.652 ************************************ 00:08:25.652 START TEST alias_rpc 00:08:25.652 ************************************ 00:08:25.652 13:32:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:25.652 * Looking for test storage... 00:08:25.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:25.652 13:32:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:25.652 13:32:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=105854 00:08:25.652 13:32:04 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.652 13:32:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 105854 00:08:25.652 13:32:04 -- common/autotest_common.sh@819 -- # '[' -z 105854 ']' 00:08:25.652 13:32:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.652 13:32:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.652 13:32:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.652 13:32:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.652 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.652 [2024-07-10 13:32:04.998456] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
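waitforlisten blocks until the target's RPC socket answers, retrying up to the max_retries=100 shown in the trace. The real helper lives in autotest_common.sh and may differ in detail; a hedged equivalent of the idea:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            # spdk_get_version is a cheap RPC that succeeds once the server is listening
            scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # stop waiting if the target died
            sleep 0.1
        done
        return 1
    }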
00:08:25.652 [2024-07-10 13:32:04.998680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105854 ] 00:08:25.911 [2024-07-10 13:32:05.153998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.170 [2024-07-10 13:32:05.352133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.170 [2024-07-10 13:32:05.352426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.549 13:32:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.549 13:32:06 -- common/autotest_common.sh@852 -- # return 0 00:08:27.549 13:32:06 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:27.549 13:32:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 105854 00:08:27.549 13:32:06 -- common/autotest_common.sh@926 -- # '[' -z 105854 ']' 00:08:27.549 13:32:06 -- common/autotest_common.sh@930 -- # kill -0 105854 00:08:27.549 13:32:06 -- common/autotest_common.sh@931 -- # uname 00:08:27.549 13:32:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:27.549 13:32:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105854 00:08:27.549 13:32:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:27.549 13:32:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:27.549 13:32:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105854' 00:08:27.549 killing process with pid 105854 00:08:27.549 13:32:06 -- common/autotest_common.sh@945 -- # kill 105854 00:08:27.549 13:32:06 -- common/autotest_common.sh@950 -- # wait 105854 00:08:30.092 ************************************ 00:08:30.092 END TEST alias_rpc 00:08:30.092 ************************************ 00:08:30.092 00:08:30.092 real 0m4.156s 00:08:30.092 user 0m4.371s 00:08:30.092 sys 0m0.438s 00:08:30.092 13:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.092 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:08:30.092 13:32:09 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:30.092 13:32:09 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:30.092 13:32:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:30.092 13:32:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.092 13:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:30.092 ************************************ 00:08:30.092 START TEST spdkcli_tcp 00:08:30.092 ************************************ 00:08:30.092 13:32:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:30.092 * Looking for test storage... 
00:08:30.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:30.092 13:32:09 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:30.092 13:32:09 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:30.092 13:32:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:30.092 13:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=105965 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@27 -- # waitforlisten 105965 00:08:30.092 13:32:09 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:30.092 13:32:09 -- common/autotest_common.sh@819 -- # '[' -z 105965 ']' 00:08:30.092 13:32:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.092 13:32:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:30.092 13:32:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.092 13:32:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:30.092 13:32:09 -- common/autotest_common.sh@10 -- # set +x 00:08:30.092 [2024-07-10 13:32:09.228450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
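Here the target listens only on a UNIX-domain socket; the TCP test bridges it with socat and then drives rpc.py against 127.0.0.1:9998. In outline (addresses, port, and retry counts as in the trace; socat is assumed installed):

    # forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 retries the connection; -t 2 is the per-request timeout in seconds
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"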
00:08:30.092 [2024-07-10 13:32:09.228683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105965 ] 00:08:30.092 [2024-07-10 13:32:09.375631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:30.350 [2024-07-10 13:32:09.566103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.350 [2024-07-10 13:32:09.566559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.350 [2024-07-10 13:32:09.566566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.725 13:32:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.725 13:32:10 -- common/autotest_common.sh@852 -- # return 0 00:08:31.725 13:32:10 -- spdkcli/tcp.sh@31 -- # socat_pid=106001 00:08:31.725 13:32:10 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:31.725 13:32:10 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:31.725 [ 00:08:31.725 "spdk_get_version", 00:08:31.725 "rpc_get_methods", 00:08:31.725 "trace_get_info", 00:08:31.725 "trace_get_tpoint_group_mask", 00:08:31.725 "trace_disable_tpoint_group", 00:08:31.725 "trace_enable_tpoint_group", 00:08:31.725 "trace_clear_tpoint_mask", 00:08:31.725 "trace_set_tpoint_mask", 00:08:31.725 "framework_get_pci_devices", 00:08:31.725 "framework_get_config", 00:08:31.725 "framework_get_subsystems", 00:08:31.725 "iobuf_get_stats", 00:08:31.725 "iobuf_set_options", 00:08:31.725 "sock_set_default_impl", 00:08:31.725 "sock_impl_set_options", 00:08:31.725 "sock_impl_get_options", 00:08:31.725 "vmd_rescan", 00:08:31.725 "vmd_remove_device", 00:08:31.725 "vmd_enable", 00:08:31.725 "accel_get_stats", 00:08:31.725 "accel_set_options", 00:08:31.725 "accel_set_driver", 00:08:31.725 "accel_crypto_key_destroy", 00:08:31.725 "accel_crypto_keys_get", 00:08:31.725 "accel_crypto_key_create", 00:08:31.725 "accel_assign_opc", 00:08:31.725 "accel_get_module_info", 00:08:31.725 "accel_get_opc_assignments", 00:08:31.725 "notify_get_notifications", 00:08:31.725 "notify_get_types", 00:08:31.725 "bdev_get_histogram", 00:08:31.725 "bdev_enable_histogram", 00:08:31.725 "bdev_set_qos_limit", 00:08:31.725 "bdev_set_qd_sampling_period", 00:08:31.725 "bdev_get_bdevs", 00:08:31.725 "bdev_reset_iostat", 00:08:31.725 "bdev_get_iostat", 00:08:31.725 "bdev_examine", 00:08:31.725 "bdev_wait_for_examine", 00:08:31.725 "bdev_set_options", 00:08:31.725 "scsi_get_devices", 00:08:31.725 "thread_set_cpumask", 00:08:31.725 "framework_get_scheduler", 00:08:31.725 "framework_set_scheduler", 00:08:31.725 "framework_get_reactors", 00:08:31.725 "thread_get_io_channels", 00:08:31.725 "thread_get_pollers", 00:08:31.725 "thread_get_stats", 00:08:31.725 "framework_monitor_context_switch", 00:08:31.725 "spdk_kill_instance", 00:08:31.725 "log_enable_timestamps", 00:08:31.725 "log_get_flags", 00:08:31.725 "log_clear_flag", 00:08:31.725 "log_set_flag", 00:08:31.725 "log_get_level", 00:08:31.725 "log_set_level", 00:08:31.725 "log_get_print_level", 00:08:31.725 "log_set_print_level", 00:08:31.725 "framework_enable_cpumask_locks", 00:08:31.725 "framework_disable_cpumask_locks", 00:08:31.725 "framework_wait_init", 00:08:31.725 "framework_start_init", 00:08:31.725 "virtio_blk_create_transport", 00:08:31.725 "virtio_blk_get_transports", 
00:08:31.725 "vhost_controller_set_coalescing", 00:08:31.725 "vhost_get_controllers", 00:08:31.725 "vhost_delete_controller", 00:08:31.725 "vhost_create_blk_controller", 00:08:31.725 "vhost_scsi_controller_remove_target", 00:08:31.725 "vhost_scsi_controller_add_target", 00:08:31.725 "vhost_start_scsi_controller", 00:08:31.725 "vhost_create_scsi_controller", 00:08:31.725 "nbd_get_disks", 00:08:31.725 "nbd_stop_disk", 00:08:31.725 "nbd_start_disk", 00:08:31.725 "env_dpdk_get_mem_stats", 00:08:31.725 "nvmf_subsystem_get_listeners", 00:08:31.725 "nvmf_subsystem_get_qpairs", 00:08:31.725 "nvmf_subsystem_get_controllers", 00:08:31.725 "nvmf_get_stats", 00:08:31.725 "nvmf_get_transports", 00:08:31.725 "nvmf_create_transport", 00:08:31.725 "nvmf_get_targets", 00:08:31.725 "nvmf_delete_target", 00:08:31.725 "nvmf_create_target", 00:08:31.725 "nvmf_subsystem_allow_any_host", 00:08:31.725 "nvmf_subsystem_remove_host", 00:08:31.725 "nvmf_subsystem_add_host", 00:08:31.725 "nvmf_subsystem_remove_ns", 00:08:31.725 "nvmf_subsystem_add_ns", 00:08:31.725 "nvmf_subsystem_listener_set_ana_state", 00:08:31.725 "nvmf_discovery_get_referrals", 00:08:31.725 "nvmf_discovery_remove_referral", 00:08:31.725 "nvmf_discovery_add_referral", 00:08:31.725 "nvmf_subsystem_remove_listener", 00:08:31.725 "nvmf_subsystem_add_listener", 00:08:31.725 "nvmf_delete_subsystem", 00:08:31.725 "nvmf_create_subsystem", 00:08:31.725 "nvmf_get_subsystems", 00:08:31.725 "nvmf_set_crdt", 00:08:31.725 "nvmf_set_config", 00:08:31.725 "nvmf_set_max_subsystems", 00:08:31.725 "iscsi_set_options", 00:08:31.725 "iscsi_get_auth_groups", 00:08:31.725 "iscsi_auth_group_remove_secret", 00:08:31.725 "iscsi_auth_group_add_secret", 00:08:31.725 "iscsi_delete_auth_group", 00:08:31.725 "iscsi_create_auth_group", 00:08:31.725 "iscsi_set_discovery_auth", 00:08:31.725 "iscsi_get_options", 00:08:31.725 "iscsi_target_node_request_logout", 00:08:31.725 "iscsi_target_node_set_redirect", 00:08:31.725 "iscsi_target_node_set_auth", 00:08:31.725 "iscsi_target_node_add_lun", 00:08:31.725 "iscsi_get_connections", 00:08:31.725 "iscsi_portal_group_set_auth", 00:08:31.725 "iscsi_start_portal_group", 00:08:31.725 "iscsi_delete_portal_group", 00:08:31.725 "iscsi_create_portal_group", 00:08:31.726 "iscsi_get_portal_groups", 00:08:31.726 "iscsi_delete_target_node", 00:08:31.726 "iscsi_target_node_remove_pg_ig_maps", 00:08:31.726 "iscsi_target_node_add_pg_ig_maps", 00:08:31.726 "iscsi_create_target_node", 00:08:31.726 "iscsi_get_target_nodes", 00:08:31.726 "iscsi_delete_initiator_group", 00:08:31.726 "iscsi_initiator_group_remove_initiators", 00:08:31.726 "iscsi_initiator_group_add_initiators", 00:08:31.726 "iscsi_create_initiator_group", 00:08:31.726 "iscsi_get_initiator_groups", 00:08:31.726 "iaa_scan_accel_module", 00:08:31.726 "dsa_scan_accel_module", 00:08:31.726 "ioat_scan_accel_module", 00:08:31.726 "accel_error_inject_error", 00:08:31.726 "bdev_iscsi_delete", 00:08:31.726 "bdev_iscsi_create", 00:08:31.726 "bdev_iscsi_set_options", 00:08:31.726 "bdev_virtio_attach_controller", 00:08:31.726 "bdev_virtio_scsi_get_devices", 00:08:31.726 "bdev_virtio_detach_controller", 00:08:31.726 "bdev_virtio_blk_set_hotplug", 00:08:31.726 "bdev_ftl_set_property", 00:08:31.726 "bdev_ftl_get_properties", 00:08:31.726 "bdev_ftl_get_stats", 00:08:31.726 "bdev_ftl_unmap", 00:08:31.726 "bdev_ftl_unload", 00:08:31.726 "bdev_ftl_delete", 00:08:31.726 "bdev_ftl_load", 00:08:31.726 "bdev_ftl_create", 00:08:31.726 "bdev_aio_delete", 00:08:31.726 "bdev_aio_rescan", 00:08:31.726 "bdev_aio_create", 
00:08:31.726 "blobfs_create", 00:08:31.726 "blobfs_detect", 00:08:31.726 "blobfs_set_cache_size", 00:08:31.726 "bdev_zone_block_delete", 00:08:31.726 "bdev_zone_block_create", 00:08:31.726 "bdev_delay_delete", 00:08:31.726 "bdev_delay_create", 00:08:31.726 "bdev_delay_update_latency", 00:08:31.726 "bdev_split_delete", 00:08:31.726 "bdev_split_create", 00:08:31.726 "bdev_error_inject_error", 00:08:31.726 "bdev_error_delete", 00:08:31.726 "bdev_error_create", 00:08:31.726 "bdev_raid_set_options", 00:08:31.726 "bdev_raid_remove_base_bdev", 00:08:31.726 "bdev_raid_add_base_bdev", 00:08:31.726 "bdev_raid_delete", 00:08:31.726 "bdev_raid_create", 00:08:31.726 "bdev_raid_get_bdevs", 00:08:31.726 "bdev_lvol_grow_lvstore", 00:08:31.726 "bdev_lvol_get_lvols", 00:08:31.726 "bdev_lvol_get_lvstores", 00:08:31.726 "bdev_lvol_delete", 00:08:31.726 "bdev_lvol_set_read_only", 00:08:31.726 "bdev_lvol_resize", 00:08:31.726 "bdev_lvol_decouple_parent", 00:08:31.726 "bdev_lvol_inflate", 00:08:31.726 "bdev_lvol_rename", 00:08:31.726 "bdev_lvol_clone_bdev", 00:08:31.726 "bdev_lvol_clone", 00:08:31.726 "bdev_lvol_snapshot", 00:08:31.726 "bdev_lvol_create", 00:08:31.726 "bdev_lvol_delete_lvstore", 00:08:31.726 "bdev_lvol_rename_lvstore", 00:08:31.726 "bdev_lvol_create_lvstore", 00:08:31.726 "bdev_passthru_delete", 00:08:31.726 "bdev_passthru_create", 00:08:31.726 "bdev_nvme_cuse_unregister", 00:08:31.726 "bdev_nvme_cuse_register", 00:08:31.726 "bdev_opal_new_user", 00:08:31.726 "bdev_opal_set_lock_state", 00:08:31.726 "bdev_opal_delete", 00:08:31.726 "bdev_opal_get_info", 00:08:31.726 "bdev_opal_create", 00:08:31.726 "bdev_nvme_opal_revert", 00:08:31.726 "bdev_nvme_opal_init", 00:08:31.726 "bdev_nvme_send_cmd", 00:08:31.726 "bdev_nvme_get_path_iostat", 00:08:31.726 "bdev_nvme_get_mdns_discovery_info", 00:08:31.726 "bdev_nvme_stop_mdns_discovery", 00:08:31.726 "bdev_nvme_start_mdns_discovery", 00:08:31.726 "bdev_nvme_set_multipath_policy", 00:08:31.726 "bdev_nvme_set_preferred_path", 00:08:31.726 "bdev_nvme_get_io_paths", 00:08:31.726 "bdev_nvme_remove_error_injection", 00:08:31.726 "bdev_nvme_add_error_injection", 00:08:31.726 "bdev_nvme_get_discovery_info", 00:08:31.726 "bdev_nvme_stop_discovery", 00:08:31.726 "bdev_nvme_start_discovery", 00:08:31.726 "bdev_nvme_get_controller_health_info", 00:08:31.726 "bdev_nvme_disable_controller", 00:08:31.726 "bdev_nvme_enable_controller", 00:08:31.726 "bdev_nvme_reset_controller", 00:08:31.726 "bdev_nvme_get_transport_statistics", 00:08:31.726 "bdev_nvme_apply_firmware", 00:08:31.726 "bdev_nvme_detach_controller", 00:08:31.726 "bdev_nvme_get_controllers", 00:08:31.726 "bdev_nvme_attach_controller", 00:08:31.726 "bdev_nvme_set_hotplug", 00:08:31.726 "bdev_nvme_set_options", 00:08:31.726 "bdev_null_resize", 00:08:31.726 "bdev_null_delete", 00:08:31.726 "bdev_null_create", 00:08:31.726 "bdev_malloc_delete", 00:08:31.726 "bdev_malloc_create" 00:08:31.726 ] 00:08:31.726 13:32:10 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:31.726 13:32:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:31.726 13:32:10 -- common/autotest_common.sh@10 -- # set +x 00:08:31.726 13:32:10 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:31.726 13:32:10 -- spdkcli/tcp.sh@38 -- # killprocess 105965 00:08:31.726 13:32:10 -- common/autotest_common.sh@926 -- # '[' -z 105965 ']' 00:08:31.726 13:32:10 -- common/autotest_common.sh@930 -- # kill -0 105965 00:08:31.726 13:32:10 -- common/autotest_common.sh@931 -- # uname 00:08:31.726 13:32:10 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:08:31.726 13:32:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105965 00:08:31.726 killing process with pid 105965 00:08:31.726 13:32:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:31.726 13:32:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:31.726 13:32:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105965' 00:08:31.726 13:32:10 -- common/autotest_common.sh@945 -- # kill 105965 00:08:31.726 13:32:10 -- common/autotest_common.sh@950 -- # wait 105965 00:08:34.261 ************************************ 00:08:34.261 END TEST spdkcli_tcp 00:08:34.261 ************************************ 00:08:34.261 00:08:34.261 real 0m4.167s 00:08:34.261 user 0m7.586s 00:08:34.261 sys 0m0.504s 00:08:34.261 13:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.261 13:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:34.261 13:32:13 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:34.261 13:32:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:34.261 13:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.261 13:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:34.261 ************************************ 00:08:34.261 START TEST dpdk_mem_utility 00:08:34.261 ************************************ 00:08:34.261 13:32:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:34.261 * Looking for test storage... 00:08:34.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:34.261 13:32:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:34.261 13:32:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=106107 00:08:34.261 13:32:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.261 13:32:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 106107 00:08:34.261 13:32:13 -- common/autotest_common.sh@819 -- # '[' -z 106107 ']' 00:08:34.261 13:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.261 13:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:34.261 13:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.261 13:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:34.261 13:32:13 -- common/autotest_common.sh@10 -- # set +x 00:08:34.261 [2024-07-10 13:32:13.438834] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:34.261 [2024-07-10 13:32:13.439046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106107 ] 00:08:34.261 [2024-07-10 13:32:13.600071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.521 [2024-07-10 13:32:13.793094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:34.521 [2024-07-10 13:32:13.793359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.897 13:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:35.897 13:32:14 -- common/autotest_common.sh@852 -- # return 0 00:08:35.897 13:32:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:35.897 13:32:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:35.897 13:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.897 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:08:35.897 { 00:08:35.897 "filename": "/tmp/spdk_mem_dump.txt" 00:08:35.897 } 00:08:35.897 13:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:35.897 13:32:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:35.897 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:35.897 1 heaps totaling size 820.000000 MiB 00:08:35.897 size: 820.000000 MiB heap id: 0 00:08:35.897 end heaps---------- 00:08:35.897 8 mempools totaling size 598.116089 MiB 00:08:35.897 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:35.897 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:35.897 size: 84.521057 MiB name: bdev_io_106107 00:08:35.897 size: 51.011292 MiB name: evtpool_106107 00:08:35.897 size: 50.003479 MiB name: msgpool_106107 00:08:35.897 size: 21.763794 MiB name: PDU_Pool 00:08:35.897 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:35.897 size: 0.026123 MiB name: Session_Pool 00:08:35.897 end mempools------- 00:08:35.897 6 memzones totaling size 4.142822 MiB 00:08:35.897 size: 1.000366 MiB name: RG_ring_0_106107 00:08:35.897 size: 1.000366 MiB name: RG_ring_1_106107 00:08:35.897 size: 1.000366 MiB name: RG_ring_4_106107 00:08:35.897 size: 1.000366 MiB name: RG_ring_5_106107 00:08:35.897 size: 0.125366 MiB name: RG_ring_2_106107 00:08:35.897 size: 0.015991 MiB name: RG_ring_3_106107 00:08:35.897 end memzones------- 00:08:35.897 13:32:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:35.897 heap id: 0 total size: 820.000000 MiB number of busy elements: 224 number of free elements: 18 00:08:35.897 list of free elements. 
size: 18.470215 MiB 00:08:35.897 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:35.897 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:35.897 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:35.897 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:35.897 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:35.897 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:35.897 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:35.897 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:35.897 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:35.897 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:35.897 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:35.897 element at address: 0x200000200000 with size: 0.835083 MiB 00:08:35.897 element at address: 0x20001b000000 with size: 0.560974 MiB 00:08:35.897 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:35.897 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:35.897 element at address: 0x200013800000 with size: 0.468628 MiB 00:08:35.897 element at address: 0x200028400000 with size: 0.399963 MiB 00:08:35.897 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:35.897 list of standard malloc elements. size: 199.265381 MiB 00:08:35.897 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:35.897 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:35.897 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:35.897 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:35.897 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:35.897 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:35.897 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:35.897 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:35.897 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:35.897 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:35.897 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:35.897 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:08:35.897 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:35.897 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:35.897 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:35.897 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:35.897 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:35.897 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:35.897 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:35.898 element at 
address: 0x200018efdd00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:35.898 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0920c0 
with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:35.898 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b0951c0 with size: 0.000244 MiB 
00:08:35.899 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:35.899 element at address: 0x200028466640 with size: 0.000244 MiB 00:08:35.899 element at address: 0x200028466740 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846d400 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:35.899 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:35.899 list of memzone associated elements. 
size: 602.264404 MiB 00:08:35.899 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:35.899 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:35.899 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:35.899 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:35.899 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:35.899 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_106107_0 00:08:35.899 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:35.899 associated memzone info: size: 48.002930 MiB name: MP_evtpool_106107_0 00:08:35.899 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:35.899 associated memzone info: size: 48.002930 MiB name: MP_msgpool_106107_0 00:08:35.899 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:35.899 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:35.899 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:35.899 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:35.899 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:35.899 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_106107 00:08:35.899 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:35.899 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_106107 00:08:35.899 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:35.899 associated memzone info: size: 1.007996 MiB name: MP_evtpool_106107 00:08:35.899 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:35.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:35.899 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:35.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:35.899 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:35.899 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:35.899 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:35.899 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:35.899 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:35.899 associated memzone info: size: 1.000366 MiB name: RG_ring_0_106107 00:08:35.899 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:35.899 associated memzone info: size: 1.000366 MiB name: RG_ring_1_106107 00:08:35.899 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:35.899 associated memzone info: size: 1.000366 MiB name: RG_ring_4_106107 00:08:35.899 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:35.899 associated memzone info: size: 1.000366 MiB name: RG_ring_5_106107 00:08:35.899 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:35.899 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_106107 00:08:35.899 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:35.899 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:35.899 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:35.899 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:35.899 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:35.899 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:35.899 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:35.899 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_106107 00:08:35.899 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:35.899 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:35.899 element at address: 0x200028466840 with size: 0.023804 MiB 00:08:35.899 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:35.899 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:35.899 associated memzone info: size: 0.015991 MiB name: RG_ring_3_106107 00:08:35.899 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:08:35.899 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:35.899 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:35.899 associated memzone info: size: 0.000183 MiB name: MP_msgpool_106107 00:08:35.899 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:35.899 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_106107 00:08:35.899 element at address: 0x20002846d500 with size: 0.000366 MiB 00:08:35.899 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:35.899 13:32:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:35.900 13:32:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 106107 00:08:35.900 13:32:15 -- common/autotest_common.sh@926 -- # '[' -z 106107 ']' 00:08:35.900 13:32:15 -- common/autotest_common.sh@930 -- # kill -0 106107 00:08:35.900 13:32:15 -- common/autotest_common.sh@931 -- # uname 00:08:35.900 13:32:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:35.900 13:32:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106107 00:08:35.900 13:32:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:35.900 13:32:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:35.900 13:32:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106107' 00:08:35.900 killing process with pid 106107 00:08:35.900 13:32:15 -- common/autotest_common.sh@945 -- # kill 106107 00:08:35.900 13:32:15 -- common/autotest_common.sh@950 -- # wait 106107 00:08:38.433 ************************************ 00:08:38.433 END TEST dpdk_mem_utility 00:08:38.433 ************************************ 00:08:38.433 00:08:38.433 real 0m4.028s 00:08:38.433 user 0m4.174s 00:08:38.433 sys 0m0.470s 00:08:38.433 13:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.433 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:08:38.433 13:32:17 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:38.433 13:32:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:38.433 13:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.433 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:08:38.433 ************************************ 00:08:38.433 START TEST event 00:08:38.433 ************************************ 00:08:38.433 13:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:38.433 * Looking for test storage... 
00:08:38.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:38.433 13:32:17 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:38.433 13:32:17 -- bdev/nbd_common.sh@6 -- # set -e 00:08:38.433 13:32:17 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:38.433 13:32:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:38.433 13:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.433 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:08:38.433 ************************************ 00:08:38.433 START TEST event_perf 00:08:38.433 ************************************ 00:08:38.433 13:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:38.433 Running I/O for 1 seconds...[2024-07-10 13:32:17.522531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:38.433 [2024-07-10 13:32:17.523065] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106224 ] 00:08:38.433 [2024-07-10 13:32:17.692035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.692 [2024-07-10 13:32:17.899681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.692 [2024-07-10 13:32:17.899772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.692 [2024-07-10 13:32:17.899966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.692 Running I/O for 1 seconds...[2024-07-10 13:32:17.899987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.072 00:08:40.072 lcore 0: 163817 00:08:40.072 lcore 1: 163815 00:08:40.072 lcore 2: 163817 00:08:40.072 lcore 3: 163817 00:08:40.072 done. 00:08:40.072 ************************************ 00:08:40.072 END TEST event_perf 00:08:40.072 ************************************ 00:08:40.072 00:08:40.072 real 0m1.834s 00:08:40.072 user 0m4.593s 00:08:40.072 sys 0m0.136s 00:08:40.072 13:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.072 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 13:32:19 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:40.072 13:32:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:40.072 13:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.072 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 ************************************ 00:08:40.072 START TEST event_reactor 00:08:40.072 ************************************ 00:08:40.072 13:32:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:40.072 [2024-07-10 13:32:19.422645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:40.072 [2024-07-10 13:32:19.422844] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106274 ] 00:08:40.332 [2024-07-10 13:32:19.579069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.591 [2024-07-10 13:32:19.795299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.972 test_start 00:08:41.972 oneshot 00:08:41.972 tick 100 00:08:41.972 tick 100 00:08:41.972 tick 250 00:08:41.972 tick 100 00:08:41.972 tick 100 00:08:41.972 tick 100 00:08:41.972 tick 250 00:08:41.972 tick 500 00:08:41.972 tick 100 00:08:41.972 tick 100 00:08:41.972 tick 250 00:08:41.972 tick 100 00:08:41.972 tick 100 00:08:41.972 test_end 00:08:41.972 ************************************ 00:08:41.972 END TEST event_reactor 00:08:41.972 ************************************ 00:08:41.972 00:08:41.972 real 0m1.853s 00:08:41.972 user 0m1.656s 00:08:41.972 sys 0m0.096s 00:08:41.972 13:32:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.972 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:08:41.972 13:32:21 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:41.972 13:32:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:41.972 13:32:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.972 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:08:41.972 ************************************ 00:08:41.972 START TEST event_reactor_perf 00:08:41.972 ************************************ 00:08:41.972 13:32:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:42.231 [2024-07-10 13:32:21.338135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:42.232 [2024-07-10 13:32:21.338327] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106325 ] 00:08:42.232 [2024-07-10 13:32:21.496470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.491 [2024-07-10 13:32:21.665880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.871 test_start 00:08:43.871 test_end 00:08:43.871 Performance: 426831 events per second 00:08:43.871 ************************************ 00:08:43.871 END TEST event_reactor_perf 00:08:43.871 ************************************ 00:08:43.871 00:08:43.871 real 0m1.741s 00:08:43.871 user 0m1.530s 00:08:43.871 sys 0m0.110s 00:08:43.871 13:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.871 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:08:43.871 13:32:23 -- event/event.sh@49 -- # uname -s 00:08:43.871 13:32:23 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:43.871 13:32:23 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:43.871 13:32:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.871 13:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.871 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:08:43.871 ************************************ 00:08:43.871 START TEST event_scheduler 00:08:43.871 ************************************ 00:08:43.871 13:32:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:43.871 * Looking for test storage... 00:08:43.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:43.871 13:32:23 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:43.871 13:32:23 -- scheduler/scheduler.sh@35 -- # scheduler_pid=106411 00:08:43.871 13:32:23 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:43.871 13:32:23 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.871 13:32:23 -- scheduler/scheduler.sh@37 -- # waitforlisten 106411 00:08:43.871 13:32:23 -- common/autotest_common.sh@819 -- # '[' -z 106411 ']' 00:08:43.871 13:32:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.871 13:32:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.871 13:32:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.871 13:32:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.871 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:08:44.129 [2024-07-10 13:32:23.289400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:44.129 [2024-07-10 13:32:23.289607] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106411 ] 00:08:44.129 [2024-07-10 13:32:23.457308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.389 [2024-07-10 13:32:23.655783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.389 [2024-07-10 13:32:23.656162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.389 [2024-07-10 13:32:23.655990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.389 [2024-07-10 13:32:23.656172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.957 13:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:44.957 13:32:24 -- common/autotest_common.sh@852 -- # return 0 00:08:44.957 13:32:24 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:44.957 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.957 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:44.957 POWER: Env isn't set yet! 00:08:44.957 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:44.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.957 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.957 POWER: Attempting to initialise PSTAT power management... 00:08:44.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.957 POWER: Cannot set governor of lcore 0 to performance 00:08:44.957 POWER: Attempting to initialise AMD PSTATE power management... 00:08:44.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.957 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.957 POWER: Attempting to initialise CPPC power management... 00:08:44.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:44.957 POWER: Cannot set governor of lcore 0 to userspace 00:08:44.957 POWER: Attempting to initialise VM power management... 
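[editor's note] The POWER lines here show DPDK's power library probing each cpufreq backend in turn (acpi-cpufreq, intel_pstate, amd-pstate, CPPC, and finally the VM guest channel attempted just above, whose failure follows below); in this guest none of them exposes a writable scaling_governor, so the dynamic scheduler falls back ("Unable to initialize dpdk governor") and only applies its load/core/busy thresholds of 20/80/95. As an illustrative host-side sketch, not part of the test, one way to check whether a governor is actually settable:

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      gov=$cpu/cpufreq/scaling_governor
      if [ -w "$gov" ]; then
        echo "$cpu: $(cat "$gov")"            # governor present and writable
      else
        echo "$cpu: governor not writable"    # what this VM reports, hence the fallback
      fi
    done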
00:08:44.957 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:44.957 POWER: Unable to set Power Management Environment for lcore 0 00:08:44.957 [2024-07-10 13:32:24.129561] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:44.957 [2024-07-10 13:32:24.129632] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:44.957 [2024-07-10 13:32:24.129669] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:44.958 [2024-07-10 13:32:24.129744] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:44.958 [2024-07-10 13:32:24.129813] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:44.958 [2024-07-10 13:32:24.129867] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:44.958 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.958 13:32:24 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:44.958 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.958 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 [2024-07-10 13:32:24.503491] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:45.217 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:45.217 13:32:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.217 13:32:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 ************************************ 00:08:45.217 START TEST scheduler_create_thread 00:08:45.217 ************************************ 00:08:45.217 13:32:24 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:45.217 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 2 00:08:45.217 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:45.217 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 3 00:08:45.217 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:45.217 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 4 00:08:45.217 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:45.217 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 5 00:08:45.217 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 13:32:24 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:45.217 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.217 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 6 00:08:45.477 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.477 13:32:24 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:45.477 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.477 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 7 00:08:45.477 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.477 13:32:24 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:45.477 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.477 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 8 00:08:45.477 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.477 13:32:24 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:45.477 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.477 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 9 00:08:45.477 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.477 13:32:24 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:45.477 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.477 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 10 00:08:45.477 13:32:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.477 13:32:24 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:45.477 13:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:45.477 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:08:46.857 13:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.857 13:32:25 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:46.857 13:32:25 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:46.857 13:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.857 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:08:47.426 13:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.426 13:32:26 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:47.426 13:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.426 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:08:48.363 13:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.364 13:32:27 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:48.364 13:32:27 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:48.364 13:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.364 13:32:27 -- common/autotest_common.sh@10 -- # set +x 00:08:49.300 ************************************ 00:08:49.300 END TEST scheduler_create_thread 00:08:49.300 ************************************ 00:08:49.300 13:32:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.300 00:08:49.300 real 0m3.899s 00:08:49.300 user 0m0.013s 00:08:49.300 sys 0m0.011s 00:08:49.300 13:32:28 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.300 13:32:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.300 13:32:28 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:49.300 13:32:28 -- scheduler/scheduler.sh@46 -- # killprocess 106411 00:08:49.300 13:32:28 -- common/autotest_common.sh@926 -- # '[' -z 106411 ']' 00:08:49.300 13:32:28 -- common/autotest_common.sh@930 -- # kill -0 106411 00:08:49.300 13:32:28 -- common/autotest_common.sh@931 -- # uname 00:08:49.300 13:32:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:49.300 13:32:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106411 00:08:49.300 killing process with pid 106411 00:08:49.300 13:32:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:49.300 13:32:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:49.300 13:32:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106411' 00:08:49.300 13:32:28 -- common/autotest_common.sh@945 -- # kill 106411 00:08:49.300 13:32:28 -- common/autotest_common.sh@950 -- # wait 106411 00:08:49.559 [2024-07-10 13:32:28.796352] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:50.935 ************************************ 00:08:50.935 END TEST event_scheduler 00:08:50.935 ************************************ 00:08:50.935 00:08:50.935 real 0m7.107s 00:08:50.935 user 0m14.023s 00:08:50.935 sys 0m0.495s 00:08:50.935 13:32:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.935 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:08:50.935 13:32:30 -- event/event.sh@51 -- # modprobe -n nbd 00:08:50.935 13:32:30 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:50.935 13:32:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.935 13:32:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.935 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:08:50.935 ************************************ 00:08:50.935 START TEST app_repeat 00:08:50.935 ************************************ 00:08:50.935 13:32:30 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:08:50.935 13:32:30 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.936 13:32:30 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:08:50.936 13:32:30 -- event/event.sh@13 -- # local nbd_list 00:08:50.936 13:32:30 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:08:50.936 13:32:30 -- event/event.sh@14 -- # local bdev_list 00:08:50.936 13:32:30 -- event/event.sh@15 -- # local repeat_times=4 00:08:50.936 13:32:30 -- event/event.sh@17 -- # modprobe nbd 00:08:50.936 13:32:30 -- event/event.sh@19 -- # repeat_pid=106558 00:08:50.936 13:32:30 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:50.936 13:32:30 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:50.936 13:32:30 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106558' 00:08:50.936 Process app_repeat pid: 106558 00:08:50.936 13:32:30 -- event/event.sh@23 -- # for i in {0..2} 00:08:50.936 13:32:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:50.936 spdk_app_start Round 0 00:08:50.936 13:32:30 -- event/event.sh@25 -- # waitforlisten 106558 /var/tmp/spdk-nbd.sock 00:08:50.936 13:32:30 -- common/autotest_common.sh@819 -- # '[' -z 106558 ']' 00:08:50.936 13:32:30 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:50.936 13:32:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.936 13:32:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:50.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:50.936 13:32:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.936 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.193 [2024-07-10 13:32:30.339651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:51.193 [2024-07-10 13:32:30.339852] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106558 ] 00:08:51.193 [2024-07-10 13:32:30.503963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.451 [2024-07-10 13:32:30.705550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.451 [2024-07-10 13:32:30.705552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.016 13:32:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.016 13:32:31 -- common/autotest_common.sh@852 -- # return 0 00:08:52.016 13:32:31 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.274 Malloc0 00:08:52.274 13:32:31 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.548 Malloc1 00:08:52.548 13:32:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@12 -- # local i 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.548 13:32:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:52.813 /dev/nbd0 00:08:52.813 13:32:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.813 13:32:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.813 13:32:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:52.813 13:32:31 -- common/autotest_common.sh@857 -- # local i 00:08:52.813 13:32:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:52.813 13:32:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:52.813 
13:32:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:52.813 13:32:31 -- common/autotest_common.sh@861 -- # break 00:08:52.813 13:32:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:52.813 13:32:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:52.813 13:32:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.813 1+0 records in 00:08:52.813 1+0 records out 00:08:52.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299493 s, 13.7 MB/s 00:08:52.813 13:32:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.813 13:32:31 -- common/autotest_common.sh@874 -- # size=4096 00:08:52.813 13:32:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.813 13:32:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:52.813 13:32:31 -- common/autotest_common.sh@877 -- # return 0 00:08:52.813 13:32:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.813 13:32:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.813 13:32:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:52.813 /dev/nbd1 00:08:52.813 13:32:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.813 13:32:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.813 13:32:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:52.813 13:32:32 -- common/autotest_common.sh@857 -- # local i 00:08:52.813 13:32:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:52.813 13:32:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:52.813 13:32:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:52.813 13:32:32 -- common/autotest_common.sh@861 -- # break 00:08:52.813 13:32:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:52.813 13:32:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:52.813 13:32:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.813 1+0 records in 00:08:52.813 1+0 records out 00:08:52.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312565 s, 13.1 MB/s 00:08:52.814 13:32:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.814 13:32:32 -- common/autotest_common.sh@874 -- # size=4096 00:08:52.814 13:32:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.814 13:32:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:52.814 13:32:32 -- common/autotest_common.sh@877 -- # return 0 00:08:52.814 13:32:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.814 13:32:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.814 13:32:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.814 13:32:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:53.073 { 00:08:53.073 "nbd_device": "/dev/nbd0", 00:08:53.073 "bdev_name": "Malloc0" 00:08:53.073 }, 00:08:53.073 { 00:08:53.073 "nbd_device": "/dev/nbd1", 00:08:53.073 "bdev_name": "Malloc1" 00:08:53.073 } 00:08:53.073 
]' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:53.073 { 00:08:53.073 "nbd_device": "/dev/nbd0", 00:08:53.073 "bdev_name": "Malloc0" 00:08:53.073 }, 00:08:53.073 { 00:08:53.073 "nbd_device": "/dev/nbd1", 00:08:53.073 "bdev_name": "Malloc1" 00:08:53.073 } 00:08:53.073 ]' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:53.073 /dev/nbd1' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:53.073 /dev/nbd1' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@65 -- # count=2 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@95 -- # count=2 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:53.073 13:32:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:53.332 256+0 records in 00:08:53.332 256+0 records out 00:08:53.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142297 s, 73.7 MB/s 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:53.332 256+0 records in 00:08:53.332 256+0 records out 00:08:53.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239076 s, 43.9 MB/s 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:53.332 256+0 records in 00:08:53.332 256+0 records out 00:08:53.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03149 s, 33.3 MB/s 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@51 -- # local i 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.332 13:32:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@41 -- # break 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.592 13:32:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@41 -- # break 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.851 13:32:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@65 -- # true 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@65 -- # count=0 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@104 -- # count=0 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:54.110 13:32:33 -- bdev/nbd_common.sh@109 -- # return 0 00:08:54.110 13:32:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:54.369 13:32:33 -- event/event.sh@35 -- # sleep 3 00:08:55.749 [2024-07-10 13:32:34.819430] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:08:55.749 [2024-07-10 13:32:35.014162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.749 [2024-07-10 13:32:35.014166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.009 [2024-07-10 13:32:35.211521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:56.009 [2024-07-10 13:32:35.211745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:57.390 spdk_app_start Round 1 00:08:57.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:57.390 13:32:36 -- event/event.sh@23 -- # for i in {0..2} 00:08:57.390 13:32:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:57.390 13:32:36 -- event/event.sh@25 -- # waitforlisten 106558 /var/tmp/spdk-nbd.sock 00:08:57.390 13:32:36 -- common/autotest_common.sh@819 -- # '[' -z 106558 ']' 00:08:57.390 13:32:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.390 13:32:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.390 13:32:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:57.390 13:32:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.390 13:32:36 -- common/autotest_common.sh@10 -- # set +x 00:08:57.649 13:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.649 13:32:36 -- common/autotest_common.sh@852 -- # return 0 00:08:57.649 13:32:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.909 Malloc0 00:08:57.909 13:32:37 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.173 Malloc1 00:08:58.173 13:32:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@12 -- # local i 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:58.173 /dev/nbd0 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:58.173 13:32:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:58.173 13:32:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:58.173 13:32:37 -- common/autotest_common.sh@857 -- # 
local i 00:08:58.173 13:32:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:58.173 13:32:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:58.173 13:32:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:58.173 13:32:37 -- common/autotest_common.sh@861 -- # break 00:08:58.173 13:32:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:58.173 13:32:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:58.173 13:32:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.173 1+0 records in 00:08:58.173 1+0 records out 00:08:58.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591679 s, 6.9 MB/s 00:08:58.173 13:32:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.433 13:32:37 -- common/autotest_common.sh@874 -- # size=4096 00:08:58.433 13:32:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.433 13:32:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:58.433 13:32:37 -- common/autotest_common.sh@877 -- # return 0 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.433 /dev/nbd1 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.433 13:32:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:58.433 13:32:37 -- common/autotest_common.sh@857 -- # local i 00:08:58.433 13:32:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:58.433 13:32:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:58.433 13:32:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:58.433 13:32:37 -- common/autotest_common.sh@861 -- # break 00:08:58.433 13:32:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:58.433 13:32:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:58.433 13:32:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.433 1+0 records in 00:08:58.433 1+0 records out 00:08:58.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403865 s, 10.1 MB/s 00:08:58.433 13:32:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.433 13:32:37 -- common/autotest_common.sh@874 -- # size=4096 00:08:58.433 13:32:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.433 13:32:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:58.433 13:32:37 -- common/autotest_common.sh@877 -- # return 0 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.433 13:32:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.693 { 00:08:58.693 "nbd_device": "/dev/nbd0", 
00:08:58.693 "bdev_name": "Malloc0" 00:08:58.693 }, 00:08:58.693 { 00:08:58.693 "nbd_device": "/dev/nbd1", 00:08:58.693 "bdev_name": "Malloc1" 00:08:58.693 } 00:08:58.693 ]' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.693 { 00:08:58.693 "nbd_device": "/dev/nbd0", 00:08:58.693 "bdev_name": "Malloc0" 00:08:58.693 }, 00:08:58.693 { 00:08:58.693 "nbd_device": "/dev/nbd1", 00:08:58.693 "bdev_name": "Malloc1" 00:08:58.693 } 00:08:58.693 ]' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.693 /dev/nbd1' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.693 /dev/nbd1' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.693 13:32:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.693 256+0 records in 00:08:58.693 256+0 records out 00:08:58.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125063 s, 83.8 MB/s 00:08:58.693 13:32:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.693 13:32:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.693 256+0 records in 00:08:58.693 256+0 records out 00:08:58.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244486 s, 42.9 MB/s 00:08:58.693 13:32:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.693 13:32:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.953 256+0 records in 00:08:58.953 256+0 records out 00:08:58.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278943 s, 37.6 MB/s 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@51 -- # local i 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.953 13:32:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@41 -- # break 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.213 13:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@41 -- # break 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.473 13:32:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@65 -- # true 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.732 13:32:38 -- 
bdev/nbd_common.sh@104 -- # count=0 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.732 13:32:38 -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.732 13:32:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.992 13:32:39 -- event/event.sh@35 -- # sleep 3 00:09:01.374 [2024-07-10 13:32:40.539840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.634 [2024-07-10 13:32:40.735365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.634 [2024-07-10 13:32:40.735370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.634 [2024-07-10 13:32:40.934795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.634 [2024-07-10 13:32:40.934981] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:03.014 spdk_app_start Round 2 00:09:03.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:03.014 13:32:42 -- event/event.sh@23 -- # for i in {0..2} 00:09:03.014 13:32:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:03.014 13:32:42 -- event/event.sh@25 -- # waitforlisten 106558 /var/tmp/spdk-nbd.sock 00:09:03.014 13:32:42 -- common/autotest_common.sh@819 -- # '[' -z 106558 ']' 00:09:03.014 13:32:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:03.014 13:32:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.014 13:32:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:03.014 13:32:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.014 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:09:03.273 13:32:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.273 13:32:42 -- common/autotest_common.sh@852 -- # return 0 00:09:03.273 13:32:42 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.533 Malloc0 00:09:03.533 13:32:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.795 Malloc1 00:09:03.795 13:32:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@12 -- # local i 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.795 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
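The trace is now back at the top of the app_repeat driver in test/event/event.sh: each round waits for the relaunched app to listen on /var/tmp/spdk-nbd.sock, creates two 64 MiB malloc bdevs with a 4 KiB block size, pushes them through the nbd write/compare verify, then kills the instance with SIGTERM before the next round. A condensed sketch pieced together from the event.sh line numbers in the trace; $repeat_pid and $rootdir are stand-ins for values the real script holds (the pid 106558 and /home/vagrant/spdk_repo/spdk in this run).

for i in {0..2}; do                                  # event.sh@23
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # two 64 MiB malloc bdevs, 4 KiB blocks (event.sh@27/@28)
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
    # attach both to /dev/nbd0-1, dd 1 MiB of urandom through each, cmp, detach (event.sh@30)
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # event.sh@34
    sleep 3                                          # event.sh@35
done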
00:09:03.795 13:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:04.055 /dev/nbd0 00:09:04.055 13:32:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.055 13:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.055 13:32:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:04.055 13:32:43 -- common/autotest_common.sh@857 -- # local i 00:09:04.055 13:32:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.055 13:32:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.055 13:32:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:04.055 13:32:43 -- common/autotest_common.sh@861 -- # break 00:09:04.055 13:32:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.055 13:32:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.055 13:32:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.055 1+0 records in 00:09:04.055 1+0 records out 00:09:04.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492882 s, 8.3 MB/s 00:09:04.055 13:32:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.055 13:32:43 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.055 13:32:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.055 13:32:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.055 13:32:43 -- common/autotest_common.sh@877 -- # return 0 00:09:04.055 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.055 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.055 13:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:04.315 /dev/nbd1 00:09:04.315 13:32:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.315 13:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.315 13:32:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:04.315 13:32:43 -- common/autotest_common.sh@857 -- # local i 00:09:04.315 13:32:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:04.315 13:32:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:04.315 13:32:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:04.315 13:32:43 -- common/autotest_common.sh@861 -- # break 00:09:04.315 13:32:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:04.315 13:32:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:04.316 13:32:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.316 1+0 records in 00:09:04.316 1+0 records out 00:09:04.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647872 s, 6.3 MB/s 00:09:04.316 13:32:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.316 13:32:43 -- common/autotest_common.sh@874 -- # size=4096 00:09:04.316 13:32:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.316 13:32:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:04.316 13:32:43 -- common/autotest_common.sh@877 -- # return 0 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.316 13:32:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:04.316 { 00:09:04.316 "nbd_device": "/dev/nbd0", 00:09:04.316 "bdev_name": "Malloc0" 00:09:04.316 }, 00:09:04.316 { 00:09:04.316 "nbd_device": "/dev/nbd1", 00:09:04.316 "bdev_name": "Malloc1" 00:09:04.316 } 00:09:04.316 ]' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:04.576 { 00:09:04.576 "nbd_device": "/dev/nbd0", 00:09:04.576 "bdev_name": "Malloc0" 00:09:04.576 }, 00:09:04.576 { 00:09:04.576 "nbd_device": "/dev/nbd1", 00:09:04.576 "bdev_name": "Malloc1" 00:09:04.576 } 00:09:04.576 ]' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:04.576 /dev/nbd1' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:04.576 /dev/nbd1' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@65 -- # count=2 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@95 -- # count=2 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:04.576 256+0 records in 00:09:04.576 256+0 records out 00:09:04.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458974 s, 228 MB/s 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:04.576 256+0 records in 00:09:04.576 256+0 records out 00:09:04.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252164 s, 41.6 MB/s 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:04.576 256+0 records in 00:09:04.576 256+0 records out 00:09:04.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304614 s, 34.4 MB/s 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.576 13:32:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:04.577 13:32:43 -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@51 -- # local i 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.577 13:32:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@41 -- # break 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.837 13:32:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@41 -- # break 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.096 13:32:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.355 13:32:44 -- 
bdev/nbd_common.sh@65 -- # true 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.355 13:32:44 -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.355 13:32:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:05.615 13:32:44 -- event/event.sh@35 -- # sleep 3 00:09:06.995 [2024-07-10 13:32:46.178971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.253 [2024-07-10 13:32:46.372646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.253 [2024-07-10 13:32:46.372652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.253 [2024-07-10 13:32:46.570424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:07.254 [2024-07-10 13:32:46.570625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:08.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.627 13:32:47 -- event/event.sh@38 -- # waitforlisten 106558 /var/tmp/spdk-nbd.sock 00:09:08.627 13:32:47 -- common/autotest_common.sh@819 -- # '[' -z 106558 ']' 00:09:08.627 13:32:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.627 13:32:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.627 13:32:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.627 13:32:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.627 13:32:47 -- common/autotest_common.sh@10 -- # set +x 00:09:08.885 13:32:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.885 13:32:48 -- common/autotest_common.sh@852 -- # return 0 00:09:08.885 13:32:48 -- event/event.sh@39 -- # killprocess 106558 00:09:08.885 13:32:48 -- common/autotest_common.sh@926 -- # '[' -z 106558 ']' 00:09:08.885 13:32:48 -- common/autotest_common.sh@930 -- # kill -0 106558 00:09:08.885 13:32:48 -- common/autotest_common.sh@931 -- # uname 00:09:08.885 13:32:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:08.885 13:32:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106558 00:09:08.885 killing process with pid 106558 00:09:08.885 13:32:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:08.885 13:32:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:08.885 13:32:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106558' 00:09:08.885 13:32:48 -- common/autotest_common.sh@945 -- # kill 106558 00:09:08.885 13:32:48 -- common/autotest_common.sh@950 -- # wait 106558 00:09:10.261 spdk_app_start is called in Round 0. 00:09:10.261 Shutdown signal received, stop current app iteration 00:09:10.261 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:10.261 spdk_app_start is called in Round 1. 00:09:10.261 Shutdown signal received, stop current app iteration 00:09:10.261 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:10.261 spdk_app_start is called in Round 2. 
00:09:10.261 Shutdown signal received, stop current app iteration 00:09:10.261 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:10.261 spdk_app_start is called in Round 3. 00:09:10.261 Shutdown signal received, stop current app iteration 00:09:10.261 ************************************ 00:09:10.261 END TEST app_repeat 00:09:10.261 ************************************ 00:09:10.261 13:32:49 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:10.261 13:32:49 -- event/event.sh@42 -- # return 0 00:09:10.261 00:09:10.261 real 0m19.014s 00:09:10.261 user 0m39.592s 00:09:10.261 sys 0m2.256s 00:09:10.261 13:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.261 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 13:32:49 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:10.261 13:32:49 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:10.261 13:32:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.261 13:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.261 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 ************************************ 00:09:10.261 START TEST cpu_locks 00:09:10.261 ************************************ 00:09:10.261 13:32:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:10.261 * Looking for test storage... 00:09:10.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:10.261 13:32:49 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:10.261 13:32:49 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:10.261 13:32:49 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:10.261 13:32:49 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:10.261 13:32:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.261 13:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.261 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 ************************************ 00:09:10.261 START TEST default_locks 00:09:10.261 ************************************ 00:09:10.261 13:32:49 -- common/autotest_common.sh@1104 -- # default_locks 00:09:10.261 13:32:49 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107111 00:09:10.261 13:32:49 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.261 13:32:49 -- event/cpu_locks.sh@47 -- # waitforlisten 107111 00:09:10.261 13:32:49 -- common/autotest_common.sh@819 -- # '[' -z 107111 ']' 00:09:10.261 13:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.261 13:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:10.261 13:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.261 13:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:10.261 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:09:10.261 [2024-07-10 13:32:49.526840] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:10.261 [2024-07-10 13:32:49.527052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107111 ] 00:09:10.520 [2024-07-10 13:32:49.685818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.778 [2024-07-10 13:32:49.888073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.778 [2024-07-10 13:32:49.888366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.713 13:32:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:11.713 13:32:51 -- common/autotest_common.sh@852 -- # return 0 00:09:11.713 13:32:51 -- event/cpu_locks.sh@49 -- # locks_exist 107111 00:09:11.713 13:32:51 -- event/cpu_locks.sh@22 -- # lslocks -p 107111 00:09:11.713 13:32:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:11.971 13:32:51 -- event/cpu_locks.sh@50 -- # killprocess 107111 00:09:11.972 13:32:51 -- common/autotest_common.sh@926 -- # '[' -z 107111 ']' 00:09:11.972 13:32:51 -- common/autotest_common.sh@930 -- # kill -0 107111 00:09:11.972 13:32:51 -- common/autotest_common.sh@931 -- # uname 00:09:11.972 13:32:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:11.972 13:32:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107111 00:09:11.972 killing process with pid 107111 00:09:11.972 13:32:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:11.972 13:32:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:11.972 13:32:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107111' 00:09:11.972 13:32:51 -- common/autotest_common.sh@945 -- # kill 107111 00:09:11.972 13:32:51 -- common/autotest_common.sh@950 -- # wait 107111 00:09:14.501 13:32:53 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107111 00:09:14.501 13:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:09:14.501 13:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107111 00:09:14.501 13:32:53 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:14.501 13:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:14.501 13:32:53 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:14.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.501 ERROR: process (pid: 107111) is no longer running 00:09:14.501 ************************************ 00:09:14.501 END TEST default_locks 00:09:14.501 ************************************ 00:09:14.501 13:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:14.501 13:32:53 -- common/autotest_common.sh@643 -- # waitforlisten 107111 00:09:14.501 13:32:53 -- common/autotest_common.sh@819 -- # '[' -z 107111 ']' 00:09:14.501 13:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.501 13:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.501 13:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
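The default_locks test traced here leans on two helpers whose bodies are visible in the xtrace: locks_exist greps lslocks output for the spdk_cpu_lock file the target holds, and killprocess from autotest_common.sh signals the pid and reaps it. A sketch consistent with the traced line numbers, not the verbatim helpers; the sudo branch is an assumption about what the untaken '[' reactor_0 = sudo ']' test guards.

locks_exist() {                                      # cpu_locks.sh@22
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {                                      # autotest_common.sh@926-@950
    local pid=$1
    [ -n "$pid" ] || return 1                        # @926
    kill -0 "$pid" || return 1                       # @930: bail if already gone
    local name=
    [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
    echo "killing process with pid $pid"
    if [ "$name" = sudo ]; then
        sudo kill "$pid"                             # assumption: escalate for sudo-wrapped targets
    else
        kill "$pid"
    fi
    wait "$pid"                                      # @950: reap the child
}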
00:09:14.501 13:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.501 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107111) - No such process 00:09:14.501 13:32:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.501 13:32:53 -- common/autotest_common.sh@852 -- # return 1 00:09:14.501 13:32:53 -- common/autotest_common.sh@643 -- # es=1 00:09:14.501 13:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:14.501 13:32:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:14.501 13:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:14.501 13:32:53 -- event/cpu_locks.sh@54 -- # no_locks 00:09:14.501 13:32:53 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:14.501 13:32:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:14.501 13:32:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:14.501 00:09:14.501 real 0m4.233s 00:09:14.501 user 0m4.269s 00:09:14.501 sys 0m0.518s 00:09:14.501 13:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.501 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.501 13:32:53 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:14.501 13:32:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.501 13:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.501 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.501 ************************************ 00:09:14.501 START TEST default_locks_via_rpc 00:09:14.501 ************************************ 00:09:14.501 13:32:53 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:14.501 13:32:53 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:14.501 13:32:53 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107214 00:09:14.501 13:32:53 -- event/cpu_locks.sh@63 -- # waitforlisten 107214 00:09:14.501 13:32:53 -- common/autotest_common.sh@819 -- # '[' -z 107214 ']' 00:09:14.501 13:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.501 13:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.501 13:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.501 13:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.501 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.501 [2024-07-10 13:32:53.796628] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:14.501 [2024-07-10 13:32:53.796852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107214 ] 00:09:14.759 [2024-07-10 13:32:53.955765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.017 [2024-07-10 13:32:54.186974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:15.017 [2024-07-10 13:32:54.187235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.392 13:32:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.392 13:32:55 -- common/autotest_common.sh@852 -- # return 0 00:09:16.392 13:32:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:16.392 13:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:16.392 13:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 13:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:16.392 13:32:55 -- event/cpu_locks.sh@67 -- # no_locks 00:09:16.392 13:32:55 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:16.392 13:32:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:16.392 13:32:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:16.392 13:32:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:16.392 13:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:16.392 13:32:55 -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 13:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:16.392 13:32:55 -- event/cpu_locks.sh@71 -- # locks_exist 107214 00:09:16.392 13:32:55 -- event/cpu_locks.sh@22 -- # lslocks -p 107214 00:09:16.392 13:32:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.392 13:32:55 -- event/cpu_locks.sh@73 -- # killprocess 107214 00:09:16.392 13:32:55 -- common/autotest_common.sh@926 -- # '[' -z 107214 ']' 00:09:16.392 13:32:55 -- common/autotest_common.sh@930 -- # kill -0 107214 00:09:16.392 13:32:55 -- common/autotest_common.sh@931 -- # uname 00:09:16.392 13:32:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:16.392 13:32:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107214 00:09:16.392 killing process with pid 107214 00:09:16.392 13:32:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:16.392 13:32:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:16.392 13:32:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107214' 00:09:16.392 13:32:55 -- common/autotest_common.sh@945 -- # kill 107214 00:09:16.392 13:32:55 -- common/autotest_common.sh@950 -- # wait 107214 00:09:18.916 ************************************ 00:09:18.916 END TEST default_locks_via_rpc 00:09:18.916 ************************************ 00:09:18.916 00:09:18.916 real 0m3.986s 00:09:18.916 user 0m4.044s 00:09:18.916 sys 0m0.527s 00:09:18.916 13:32:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.916 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.916 13:32:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:18.916 13:32:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:18.917 13:32:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:18.917 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.917 
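default_locks_via_rpc, just traced, exercises the same lock file over the RPC surface instead of process lifetime: the target starts with the core-0 lock held, releases it while running, confirms nothing is left under /var/tmp/spdk_cpu_lock*, then re-takes it. A hedged sketch of the sequence: rpc_cmd here stands for scripts/rpc.py against the socket waitforlisten watched, $spdk_tgt_pid for pid 107214, and no_locks assumes nullglob-style behavior so an empty match yields an empty array, as the traced (( 0 != 0 )) implies.

no_locks() {                                   # cpu_locks.sh@26-@27 in the trace
    local lock_files=(/var/tmp/spdk_cpu_lock*) # assumed empty array when no lock files exist
    (( ${#lock_files[@]} == 0 ))
}

rpc_cmd framework_disable_cpumask_locks        # drop the core-0 lock file at runtime
no_locks                                       # nothing left under /var/tmp/spdk_cpu_lock*
rpc_cmd framework_enable_cpumask_locks         # take it back
locks_exist "$spdk_tgt_pid"                    # lslocks shows spdk_cpu_lock held again
killprocess "$spdk_tgt_pid"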
************************************ 00:09:18.917 START TEST non_locking_app_on_locked_coremask 00:09:18.917 ************************************ 00:09:18.917 13:32:57 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:18.917 13:32:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=107293 00:09:18.917 13:32:57 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.917 13:32:57 -- event/cpu_locks.sh@81 -- # waitforlisten 107293 /var/tmp/spdk.sock 00:09:18.917 13:32:57 -- common/autotest_common.sh@819 -- # '[' -z 107293 ']' 00:09:18.917 13:32:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.917 13:32:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:18.917 13:32:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.917 13:32:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:18.917 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.917 [2024-07-10 13:32:57.864075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:18.917 [2024-07-10 13:32:57.864325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107293 ] 00:09:18.917 [2024-07-10 13:32:58.027598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.917 [2024-07-10 13:32:58.225035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.917 [2024-07-10 13:32:58.225298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.292 13:32:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.292 13:32:59 -- common/autotest_common.sh@852 -- # return 0 00:09:20.292 13:32:59 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=107321 00:09:20.292 13:32:59 -- event/cpu_locks.sh@85 -- # waitforlisten 107321 /var/tmp/spdk2.sock 00:09:20.292 13:32:59 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:20.292 13:32:59 -- common/autotest_common.sh@819 -- # '[' -z 107321 ']' 00:09:20.292 13:32:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.292 13:32:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:20.292 13:32:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.292 13:32:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:20.292 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:09:20.292 [2024-07-10 13:32:59.402480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:20.292 [2024-07-10 13:32:59.402713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107321 ] 00:09:20.292 [2024-07-10 13:32:59.550542] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
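The "CPU core locks deactivated." notice is the second target opting out: non_locking_app_on_locked_coremask shows a second spdk_tgt can share core 0 only because it passes --disable-cpumask-locks, and it keeps the two RPC sockets separate. Condensed from cpu_locks.sh@79-@85 as traced; $SPDK_BIN_DIR stands for the build/bin path spelled out above, and the pids 107293/107321 become $spdk_tgt_pid/$spdk_tgt_pid2.

"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &                     # holds the spdk_cpu_lock file for core 0
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock    # comes up despite the held lock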
00:09:20.293 [2024-07-10 13:32:59.550614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.859 [2024-07-10 13:32:59.938886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.859 [2024-07-10 13:32:59.939089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.759 13:33:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:22.759 13:33:01 -- common/autotest_common.sh@852 -- # return 0 00:09:22.759 13:33:01 -- event/cpu_locks.sh@87 -- # locks_exist 107293 00:09:22.759 13:33:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.759 13:33:01 -- event/cpu_locks.sh@22 -- # lslocks -p 107293 00:09:22.759 13:33:01 -- event/cpu_locks.sh@89 -- # killprocess 107293 00:09:22.759 13:33:01 -- common/autotest_common.sh@926 -- # '[' -z 107293 ']' 00:09:22.759 13:33:01 -- common/autotest_common.sh@930 -- # kill -0 107293 00:09:22.759 13:33:01 -- common/autotest_common.sh@931 -- # uname 00:09:22.759 13:33:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:22.759 13:33:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107293 00:09:22.759 killing process with pid 107293 00:09:22.759 13:33:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:22.759 13:33:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:22.759 13:33:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107293' 00:09:22.759 13:33:01 -- common/autotest_common.sh@945 -- # kill 107293 00:09:22.759 13:33:01 -- common/autotest_common.sh@950 -- # wait 107293 00:09:26.959 13:33:06 -- event/cpu_locks.sh@90 -- # killprocess 107321 00:09:26.959 13:33:06 -- common/autotest_common.sh@926 -- # '[' -z 107321 ']' 00:09:26.959 13:33:06 -- common/autotest_common.sh@930 -- # kill -0 107321 00:09:26.959 13:33:06 -- common/autotest_common.sh@931 -- # uname 00:09:26.959 13:33:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:26.959 13:33:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107321 00:09:26.959 13:33:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:26.959 13:33:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:26.959 13:33:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107321' 00:09:26.959 killing process with pid 107321 00:09:26.959 13:33:06 -- common/autotest_common.sh@945 -- # kill 107321 00:09:26.959 13:33:06 -- common/autotest_common.sh@950 -- # wait 107321 00:09:29.490 00:09:29.490 real 0m10.743s 00:09:29.490 user 0m11.125s 00:09:29.490 sys 0m1.136s 00:09:29.490 13:33:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.490 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 ************************************ 00:09:29.490 END TEST non_locking_app_on_locked_coremask 00:09:29.490 ************************************ 00:09:29.490 13:33:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:29.490 13:33:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:29.490 13:33:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.490 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 ************************************ 00:09:29.490 START TEST locking_app_on_unlocked_coremask 00:09:29.490 ************************************ 00:09:29.490 13:33:08 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:29.490 
13:33:08 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=107484 00:09:29.490 13:33:08 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:29.490 13:33:08 -- event/cpu_locks.sh@99 -- # waitforlisten 107484 /var/tmp/spdk.sock 00:09:29.490 13:33:08 -- common/autotest_common.sh@819 -- # '[' -z 107484 ']' 00:09:29.490 13:33:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.490 13:33:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.490 13:33:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.490 13:33:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.490 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 [2024-07-10 13:33:08.667844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:29.490 [2024-07-10 13:33:08.668538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107484 ] 00:09:29.490 [2024-07-10 13:33:08.825619] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:29.490 [2024-07-10 13:33:08.825802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.752 [2024-07-10 13:33:09.028530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.752 [2024-07-10 13:33:09.028801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:31.134 13:33:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.135 13:33:10 -- common/autotest_common.sh@852 -- # return 0 00:09:31.135 13:33:10 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=107519 00:09:31.135 13:33:10 -- event/cpu_locks.sh@103 -- # waitforlisten 107519 /var/tmp/spdk2.sock 00:09:31.135 13:33:10 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:31.135 13:33:10 -- common/autotest_common.sh@819 -- # '[' -z 107519 ']' 00:09:31.135 13:33:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.135 13:33:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:31.135 13:33:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.135 13:33:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:31.135 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:09:31.135 [2024-07-10 13:33:10.202097] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
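waitforlisten, used for every target in this log, blocks until the new process answers on its JSON-RPC socket. A simplified sketch consistent with the locals the trace exposes (rpc_addr, max_retries=100, the Waiting... banner); the probe command is an assumption here, on the understanding that the real helper polls the socket through SPDK's rpc.py:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1        # target died before it could listen
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }

Passing a second argument is what lets two targets coexist in these tests: the first listens on the default socket, the second on /var/tmp/spdk2.sock.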
00:09:31.135 [2024-07-10 13:33:10.202333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107519 ] 00:09:31.135 [2024-07-10 13:33:10.351211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.702 [2024-07-10 13:33:10.756393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.702 [2024-07-10 13:33:10.756582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.079 13:33:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.079 13:33:12 -- common/autotest_common.sh@852 -- # return 0 00:09:33.079 13:33:12 -- event/cpu_locks.sh@105 -- # locks_exist 107519 00:09:33.079 13:33:12 -- event/cpu_locks.sh@22 -- # lslocks -p 107519 00:09:33.079 13:33:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:33.337 13:33:12 -- event/cpu_locks.sh@107 -- # killprocess 107484 00:09:33.337 13:33:12 -- common/autotest_common.sh@926 -- # '[' -z 107484 ']' 00:09:33.337 13:33:12 -- common/autotest_common.sh@930 -- # kill -0 107484 00:09:33.337 13:33:12 -- common/autotest_common.sh@931 -- # uname 00:09:33.337 13:33:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.337 13:33:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107484 00:09:33.337 killing process with pid 107484 00:09:33.337 13:33:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.337 13:33:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.337 13:33:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107484' 00:09:33.337 13:33:12 -- common/autotest_common.sh@945 -- # kill 107484 00:09:33.337 13:33:12 -- common/autotest_common.sh@950 -- # wait 107484 00:09:38.625 13:33:17 -- event/cpu_locks.sh@108 -- # killprocess 107519 00:09:38.625 13:33:17 -- common/autotest_common.sh@926 -- # '[' -z 107519 ']' 00:09:38.625 13:33:17 -- common/autotest_common.sh@930 -- # kill -0 107519 00:09:38.625 13:33:17 -- common/autotest_common.sh@931 -- # uname 00:09:38.625 13:33:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.625 13:33:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107519 00:09:38.625 killing process with pid 107519 00:09:38.625 13:33:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:38.625 13:33:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:38.625 13:33:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107519' 00:09:38.625 13:33:17 -- common/autotest_common.sh@945 -- # kill 107519 00:09:38.625 13:33:17 -- common/autotest_common.sh@950 -- # wait 107519 00:09:40.002 ************************************ 00:09:40.002 END TEST locking_app_on_unlocked_coremask 00:09:40.002 ************************************ 00:09:40.002 00:09:40.002 real 0m10.678s 00:09:40.002 user 0m11.064s 00:09:40.002 sys 0m1.084s 00:09:40.002 13:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.002 13:33:19 -- common/autotest_common.sh@10 -- # set +x 00:09:40.002 13:33:19 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:40.002 13:33:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.002 13:33:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.002 13:33:19 -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.002 ************************************ 00:09:40.002 START TEST locking_app_on_locked_coremask 00:09:40.002 ************************************ 00:09:40.002 13:33:19 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:40.002 13:33:19 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107679 00:09:40.002 13:33:19 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:40.002 13:33:19 -- event/cpu_locks.sh@116 -- # waitforlisten 107679 /var/tmp/spdk.sock 00:09:40.002 13:33:19 -- common/autotest_common.sh@819 -- # '[' -z 107679 ']' 00:09:40.002 13:33:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.002 13:33:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.002 13:33:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.002 13:33:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.002 13:33:19 -- common/autotest_common.sh@10 -- # set +x 00:09:40.260 [2024-07-10 13:33:19.410176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:40.260 [2024-07-10 13:33:19.410377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107679 ] 00:09:40.260 [2024-07-10 13:33:19.566042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.519 [2024-07-10 13:33:19.765575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.519 [2024-07-10 13:33:19.765821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.895 13:33:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:41.895 13:33:20 -- common/autotest_common.sh@852 -- # return 0 00:09:41.895 13:33:20 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107710 00:09:41.895 13:33:20 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107710 /var/tmp/spdk2.sock 00:09:41.895 13:33:20 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:41.895 13:33:20 -- common/autotest_common.sh@640 -- # local es=0 00:09:41.895 13:33:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107710 /var/tmp/spdk2.sock 00:09:41.895 13:33:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:41.895 13:33:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:41.895 13:33:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:41.895 13:33:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:41.895 13:33:20 -- common/autotest_common.sh@643 -- # waitforlisten 107710 /var/tmp/spdk2.sock 00:09:41.895 13:33:20 -- common/autotest_common.sh@819 -- # '[' -z 107710 ']' 00:09:41.895 13:33:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:41.895 13:33:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:41.895 13:33:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:41.895 13:33:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:41.895 13:33:20 -- common/autotest_common.sh@10 -- # set +x 00:09:41.895 [2024-07-10 13:33:20.950215] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:41.895 [2024-07-10 13:33:20.950464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107710 ] 00:09:41.895 [2024-07-10 13:33:21.092832] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107679 has claimed it. 00:09:41.895 [2024-07-10 13:33:21.092917] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:42.465 ERROR: process (pid: 107710) is no longer running 00:09:42.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107710) - No such process 00:09:42.465 13:33:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.465 13:33:21 -- common/autotest_common.sh@852 -- # return 1 00:09:42.465 13:33:21 -- common/autotest_common.sh@643 -- # es=1 00:09:42.465 13:33:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:42.465 13:33:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:42.465 13:33:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:42.465 13:33:21 -- event/cpu_locks.sh@122 -- # locks_exist 107679 00:09:42.465 13:33:21 -- event/cpu_locks.sh@22 -- # lslocks -p 107679 00:09:42.465 13:33:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:42.465 13:33:21 -- event/cpu_locks.sh@124 -- # killprocess 107679 00:09:42.465 13:33:21 -- common/autotest_common.sh@926 -- # '[' -z 107679 ']' 00:09:42.465 13:33:21 -- common/autotest_common.sh@930 -- # kill -0 107679 00:09:42.465 13:33:21 -- common/autotest_common.sh@931 -- # uname 00:09:42.465 13:33:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:42.465 13:33:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107679 00:09:42.465 killing process with pid 107679 00:09:42.465 13:33:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:42.465 13:33:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:42.465 13:33:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107679' 00:09:42.465 13:33:21 -- common/autotest_common.sh@945 -- # kill 107679 00:09:42.465 13:33:21 -- common/autotest_common.sh@950 -- # wait 107679 00:09:45.004 ************************************ 00:09:45.004 END TEST locking_app_on_locked_coremask 00:09:45.004 ************************************ 00:09:45.004 00:09:45.004 real 0m4.615s 00:09:45.004 user 0m4.847s 00:09:45.004 sys 0m0.635s 00:09:45.004 13:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.004 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:09:45.004 13:33:24 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:45.004 13:33:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:45.004 13:33:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.004 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:45.004 
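The expected failure above (NOT waitforlisten 107710 ...) relies on the NOT wrapper, which inverts an exit status after normalizing large exit codes. A sketch matching the es bookkeeping in the trace; the treatment of codes above 128 is confirmed later in this log, where an es=234 is folded down to es=106 before the final check:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # fold the >128 range down, as traced
        (( es != 0 ))                        # NOT succeeds only if the command failed
    }

Here the wrapped waitforlisten fails because the second target, launched without --disable-cpumask-locks, cannot claim core 0 and exits, which is exactly the outcome the test asserts.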
************************************ 00:09:45.004 START TEST locking_overlapped_coremask 00:09:45.004 ************************************ 00:09:45.004 13:33:24 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:45.004 13:33:24 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=107793 00:09:45.004 13:33:24 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:45.004 13:33:24 -- event/cpu_locks.sh@133 -- # waitforlisten 107793 /var/tmp/spdk.sock 00:09:45.004 13:33:24 -- common/autotest_common.sh@819 -- # '[' -z 107793 ']' 00:09:45.004 13:33:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.004 13:33:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.004 13:33:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.004 13:33:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.004 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:09:45.004 [2024-07-10 13:33:24.098625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:45.004 [2024-07-10 13:33:24.098833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107793 ] 00:09:45.004 [2024-07-10 13:33:24.266356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.264 [2024-07-10 13:33:24.458496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.264 [2024-07-10 13:33:24.458988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.264 [2024-07-10 13:33:24.459136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.264 [2024-07-10 13:33:24.459139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.645 13:33:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.645 13:33:25 -- common/autotest_common.sh@852 -- # return 0 00:09:46.645 13:33:25 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=107823 00:09:46.645 13:33:25 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 107823 /var/tmp/spdk2.sock 00:09:46.645 13:33:25 -- common/autotest_common.sh@640 -- # local es=0 00:09:46.645 13:33:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107823 /var/tmp/spdk2.sock 00:09:46.645 13:33:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:46.645 13:33:25 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:46.645 13:33:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:46.645 13:33:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:46.645 13:33:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:46.645 13:33:25 -- common/autotest_common.sh@643 -- # waitforlisten 107823 /var/tmp/spdk2.sock 00:09:46.645 13:33:25 -- common/autotest_common.sh@819 -- # '[' -z 107823 ']' 00:09:46.645 13:33:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.645 13:33:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.645 13:33:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.645 13:33:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.645 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:09:46.645 [2024-07-10 13:33:25.648126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:46.645 [2024-07-10 13:33:25.648413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107823 ] 00:09:46.645 [2024-07-10 13:33:25.823220] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107793 has claimed it. 00:09:46.645 [2024-07-10 13:33:25.823325] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.904 ERROR: process (pid: 107823) is no longer running 00:09:46.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107823) - No such process 00:09:46.904 13:33:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.904 13:33:26 -- common/autotest_common.sh@852 -- # return 1 00:09:46.904 13:33:26 -- common/autotest_common.sh@643 -- # es=1 00:09:46.904 13:33:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:46.904 13:33:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:46.904 13:33:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:46.904 13:33:26 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:46.904 13:33:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:46.904 13:33:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:46.904 13:33:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:46.904 13:33:26 -- event/cpu_locks.sh@141 -- # killprocess 107793 00:09:46.904 13:33:26 -- common/autotest_common.sh@926 -- # '[' -z 107793 ']' 00:09:46.904 13:33:26 -- common/autotest_common.sh@930 -- # kill -0 107793 00:09:46.904 13:33:26 -- common/autotest_common.sh@931 -- # uname 00:09:46.904 13:33:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:46.904 13:33:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107793 00:09:46.904 killing process with pid 107793 00:09:46.904 13:33:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:46.904 13:33:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:46.904 13:33:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107793' 00:09:46.904 13:33:26 -- common/autotest_common.sh@945 -- # kill 107793 00:09:46.904 13:33:26 -- common/autotest_common.sh@950 -- # wait 107793 00:09:49.442 ************************************ 00:09:49.442 END TEST locking_overlapped_coremask 00:09:49.442 ************************************ 00:09:49.442 00:09:49.442 real 0m4.756s 00:09:49.442 user 0m12.637s 00:09:49.442 sys 0m0.544s 00:09:49.442 13:33:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.442 13:33:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.701 13:33:28 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:49.702 13:33:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.702 13:33:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.702 13:33:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.702 ************************************ 00:09:49.702 START TEST locking_overlapped_coremask_via_rpc 00:09:49.702 ************************************ 00:09:49.702 13:33:28 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:49.702 13:33:28 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=107892 00:09:49.702 13:33:28 -- event/cpu_locks.sh@149 -- # waitforlisten 107892 /var/tmp/spdk.sock 00:09:49.702 13:33:28 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:49.702 13:33:28 -- common/autotest_common.sh@819 -- # '[' -z 107892 ']' 00:09:49.702 13:33:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.702 13:33:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.702 13:33:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.702 13:33:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.702 13:33:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.702 [2024-07-10 13:33:28.917761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:49.702 [2024-07-10 13:33:28.917949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107892 ] 00:09:49.961 [2024-07-10 13:33:29.081411] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:49.961 [2024-07-10 13:33:29.081589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.961 [2024-07-10 13:33:29.309449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:49.961 [2024-07-10 13:33:29.309946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.961 [2024-07-10 13:33:29.310123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.961 [2024-07-10 13:33:29.310140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.380 13:33:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.380 13:33:30 -- common/autotest_common.sh@852 -- # return 0 00:09:51.380 13:33:30 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=107924 00:09:51.380 13:33:30 -- event/cpu_locks.sh@153 -- # waitforlisten 107924 /var/tmp/spdk2.sock 00:09:51.380 13:33:30 -- common/autotest_common.sh@819 -- # '[' -z 107924 ']' 00:09:51.380 13:33:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.380 13:33:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.380 13:33:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
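Both targets in this via-rpc variant start with --disable-cpumask-locks, so their overlapping masks are accepted at boot; the conflict is only provoked later, over RPC. The overlap itself is plain bit arithmetic, and it pinpoints the core named in the claim error below:

    # 0x7  = 0b00111 -> cores 0, 1, 2 (the three reactors logged above)
    # 0x1c = 0b11100 -> cores 2, 3, 4
    printf 'overlap: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 = core 2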
00:09:51.380 13:33:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.380 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.380 13:33:30 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:51.380 [2024-07-10 13:33:30.454355] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:51.380 [2024-07-10 13:33:30.454560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107924 ] 00:09:51.380 [2024-07-10 13:33:30.617003] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:51.380 [2024-07-10 13:33:30.617064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.654 [2024-07-10 13:33:31.006731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.654 [2024-07-10 13:33:31.007132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.914 [2024-07-10 13:33:31.020244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.914 [2024-07-10 13:33:31.020256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.291 13:33:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.291 13:33:32 -- common/autotest_common.sh@852 -- # return 0 00:09:53.291 13:33:32 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:53.291 13:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.291 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.291 13:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.291 13:33:32 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.291 13:33:32 -- common/autotest_common.sh@640 -- # local es=0 00:09:53.291 13:33:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.291 13:33:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:53.291 13:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:53.291 13:33:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:53.291 13:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:53.291 13:33:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:53.291 13:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.291 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.291 [2024-07-10 13:33:32.636300] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107892 has claimed it. 
00:09:53.291 request: 00:09:53.291 { 00:09:53.291 "method": "framework_enable_cpumask_locks", 00:09:53.291 "req_id": 1 00:09:53.291 } 00:09:53.291 Got JSON-RPC error response 00:09:53.291 response: 00:09:53.291 { 00:09:53.291 "code": -32603, 00:09:53.291 "message": "Failed to claim CPU core: 2" 00:09:53.291 } 00:09:53.291 13:33:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:53.291 13:33:32 -- common/autotest_common.sh@643 -- # es=1 00:09:53.291 13:33:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:53.291 13:33:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:53.291 13:33:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:53.291 13:33:32 -- event/cpu_locks.sh@158 -- # waitforlisten 107892 /var/tmp/spdk.sock 00:09:53.291 13:33:32 -- common/autotest_common.sh@819 -- # '[' -z 107892 ']' 00:09:53.291 13:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.291 13:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:53.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.291 13:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.291 13:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:53.291 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 13:33:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.550 13:33:32 -- common/autotest_common.sh@852 -- # return 0 00:09:53.550 13:33:32 -- event/cpu_locks.sh@159 -- # waitforlisten 107924 /var/tmp/spdk2.sock 00:09:53.550 13:33:32 -- common/autotest_common.sh@819 -- # '[' -z 107924 ']' 00:09:53.550 13:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:53.550 13:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:53.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:53.550 13:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
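The request/response pair above is the raw JSON-RPC exchange behind rpc_cmd. Driven by hand it would look like the following; scripts/rpc.py as the client path is the usual in-tree location and an assumption here, but framework_enable_cpumask_locks is the method name shown verbatim in the request:

    # First target, default socket: claims its cores and succeeds.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target: returns error -32603 "Failed to claim CPU core: 2",
    # since process 107892 already holds the lock file for core 2.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks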
00:09:53.550 13:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:53.550 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.809 13:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.809 13:33:33 -- common/autotest_common.sh@852 -- # return 0 00:09:53.809 13:33:33 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:53.809 13:33:33 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:53.809 13:33:33 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:53.809 13:33:33 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:53.809 00:09:53.809 real 0m4.162s 00:09:53.809 user 0m1.374s 00:09:53.809 sys 0m0.231s 00:09:53.809 ************************************ 00:09:53.809 END TEST locking_overlapped_coremask_via_rpc 00:09:53.809 ************************************ 00:09:53.809 13:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.809 13:33:33 -- common/autotest_common.sh@10 -- # set +x 00:09:53.809 13:33:33 -- event/cpu_locks.sh@174 -- # cleanup 00:09:53.809 13:33:33 -- event/cpu_locks.sh@15 -- # [[ -z 107892 ]] 00:09:53.809 13:33:33 -- event/cpu_locks.sh@15 -- # killprocess 107892 00:09:53.809 13:33:33 -- common/autotest_common.sh@926 -- # '[' -z 107892 ']' 00:09:53.809 13:33:33 -- common/autotest_common.sh@930 -- # kill -0 107892 00:09:53.809 13:33:33 -- common/autotest_common.sh@931 -- # uname 00:09:53.809 13:33:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:53.809 13:33:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107892 00:09:53.809 killing process with pid 107892 00:09:53.809 13:33:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:53.809 13:33:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:53.809 13:33:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107892' 00:09:53.809 13:33:33 -- common/autotest_common.sh@945 -- # kill 107892 00:09:53.809 13:33:33 -- common/autotest_common.sh@950 -- # wait 107892 00:09:56.344 13:33:35 -- event/cpu_locks.sh@16 -- # [[ -z 107924 ]] 00:09:56.344 13:33:35 -- event/cpu_locks.sh@16 -- # killprocess 107924 00:09:56.344 13:33:35 -- common/autotest_common.sh@926 -- # '[' -z 107924 ']' 00:09:56.344 13:33:35 -- common/autotest_common.sh@930 -- # kill -0 107924 00:09:56.344 13:33:35 -- common/autotest_common.sh@931 -- # uname 00:09:56.344 13:33:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.344 13:33:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107924 00:09:56.344 killing process with pid 107924 00:09:56.344 13:33:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:56.344 13:33:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:56.344 13:33:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107924' 00:09:56.344 13:33:35 -- common/autotest_common.sh@945 -- # kill 107924 00:09:56.344 13:33:35 -- common/autotest_common.sh@950 -- # wait 107924 00:09:58.875 13:33:37 -- event/cpu_locks.sh@18 -- # rm -f 00:09:58.875 Process with pid 107892 is not found 00:09:58.875 Process with pid 107924 is not found 00:09:58.875 13:33:37 -- event/cpu_locks.sh@1 -- # cleanup 00:09:58.875 13:33:37 -- event/cpu_locks.sh@15 -- # [[ -z 
107892 ]] 00:09:58.875 13:33:37 -- event/cpu_locks.sh@15 -- # killprocess 107892 00:09:58.875 13:33:37 -- common/autotest_common.sh@926 -- # '[' -z 107892 ']' 00:09:58.875 13:33:37 -- common/autotest_common.sh@930 -- # kill -0 107892 00:09:58.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107892) - No such process 00:09:58.875 13:33:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107892 is not found' 00:09:58.875 13:33:37 -- event/cpu_locks.sh@16 -- # [[ -z 107924 ]] 00:09:58.875 13:33:37 -- event/cpu_locks.sh@16 -- # killprocess 107924 00:09:58.875 13:33:37 -- common/autotest_common.sh@926 -- # '[' -z 107924 ']' 00:09:58.875 13:33:37 -- common/autotest_common.sh@930 -- # kill -0 107924 00:09:58.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107924) - No such process 00:09:58.875 13:33:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107924 is not found' 00:09:58.875 13:33:37 -- event/cpu_locks.sh@18 -- # rm -f 00:09:58.875 ************************************ 00:09:58.875 END TEST cpu_locks 00:09:58.875 ************************************ 00:09:58.875 00:09:58.875 real 0m48.573s 00:09:58.875 user 1m22.979s 00:09:58.875 sys 0m5.841s 00:09:58.875 13:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.875 13:33:37 -- common/autotest_common.sh@10 -- # set +x 00:09:58.875 ************************************ 00:09:58.875 END TEST event 00:09:58.875 ************************************ 00:09:58.875 00:09:58.875 real 1m20.627s 00:09:58.875 user 2m24.608s 00:09:58.875 sys 0m9.207s 00:09:58.875 13:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.875 13:33:37 -- common/autotest_common.sh@10 -- # set +x 00:09:58.875 13:33:38 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:58.875 13:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:58.875 13:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.875 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:58.875 ************************************ 00:09:58.875 START TEST thread 00:09:58.875 ************************************ 00:09:58.875 13:33:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:58.875 * Looking for test storage... 00:09:58.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:58.875 13:33:38 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:58.875 13:33:38 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:58.875 13:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.875 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:58.875 ************************************ 00:09:58.875 START TEST thread_poller_perf 00:09:58.875 ************************************ 00:09:58.875 13:33:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:58.875 [2024-07-10 13:33:38.222658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
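The poller_perf flags map one-to-one onto the banner printed next: -b is the number of pollers to register, -l the poller period in microseconds (0 meaning the poller fires on every reactor pass), and -t the measurement time in seconds. So the two runs in this log amount to:

    # "Running 1000 pollers for 1 seconds with 1 microseconds period."
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # Same, but period 0: fired every reactor iteration, hence the ~14x run count.
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1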
00:09:58.875 [2024-07-10 13:33:38.222837] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108135 ] 00:09:59.133 [2024-07-10 13:33:38.388221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.390 [2024-07-10 13:33:38.625541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.390 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:00.766 ====================================== 00:10:00.766 busy:2299127100 (cyc) 00:10:00.766 total_run_count: 363000 00:10:00.766 tsc_hz: 2290000000 (cyc) 00:10:00.766 ====================================== 00:10:00.766 poller_cost: 6333 (cyc), 2765 (nsec) 00:10:00.766 ************************************ 00:10:00.766 END TEST thread_poller_perf 00:10:00.766 ************************************ 00:10:00.766 00:10:00.766 real 0m1.950s 00:10:00.766 user 0m1.703s 00:10:00.766 sys 0m0.146s 00:10:00.766 13:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.766 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:01.024 13:33:40 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:01.024 13:33:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:01.024 13:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.024 13:33:40 -- common/autotest_common.sh@10 -- # set +x 00:10:01.024 ************************************ 00:10:01.024 START TEST thread_poller_perf 00:10:01.024 ************************************ 00:10:01.024 13:33:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:01.024 [2024-07-10 13:33:40.236918] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:01.024 [2024-07-10 13:33:40.237129] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108185 ] 00:10:01.283 [2024-07-10 13:33:40.398110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.540 [2024-07-10 13:33:40.658152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.540 Running 1000 pollers for 1 seconds with 0 microseconds period. 
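The poller_cost figures in these summaries follow from the counters above them: busy cycles divided by total runs, then converted to nanoseconds through tsc_hz. Reproducing the first summary with shell arithmetic (integer division, which matches the printed values; the tool's exact rounding is an assumption):

    busy=2299127100
    runs=363000
    tsc_hz=2290000000
    echo "$((busy / runs)) (cyc)"                           # 6333
    echo "$((busy * 1000000000 / runs / tsc_hz)) (nsec)"    # 2765

The same arithmetic reproduces the zero-period summary that follows: 2294902844 / 5008000 = 458 cyc, about 200 nsec at 2.29 GHz.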
00:10:02.918 ====================================== 00:10:02.918 busy:2294902844 (cyc) 00:10:02.918 total_run_count: 5008000 00:10:02.918 tsc_hz: 2290000000 (cyc) 00:10:02.918 ====================================== 00:10:02.918 poller_cost: 458 (cyc), 200 (nsec) 00:10:02.918 ************************************ 00:10:02.918 END TEST thread_poller_perf 00:10:02.918 ************************************ 00:10:02.918 00:10:02.918 real 0m1.933s 00:10:02.918 user 0m1.686s 00:10:02.918 sys 0m0.146s 00:10:02.918 13:33:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.918 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 13:33:42 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:02.918 13:33:42 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:02.918 13:33:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:02.918 13:33:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.918 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.918 ************************************ 00:10:02.918 START TEST thread_spdk_lock 00:10:02.918 ************************************ 00:10:02.918 13:33:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:02.918 [2024-07-10 13:33:42.240219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:02.918 [2024-07-10 13:33:42.240449] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108233 ] 00:10:03.177 [2024-07-10 13:33:42.405906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:03.436 [2024-07-10 13:33:42.650440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.436 [2024-07-10 13:33:42.650459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.004 [2024-07-10 13:33:43.163213] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:04.004 [2024-07-10 13:33:43.163423] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:04.004 [2024-07-10 13:33:43.163477] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x558534395840 00:10:04.004 [2024-07-10 13:33:43.172409] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:04.004 [2024-07-10 13:33:43.172599] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:04.004 [2024-07-10 13:33:43.172655] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:04.264 Starting test contend 00:10:04.264 Worker Delay Wait us Hold us Total us 00:10:04.264 0 3 129815 188855 318671 00:10:04.264 1 5 61433 291534 352968 00:10:04.264 PASS test contend 00:10:04.264 Starting test hold_by_poller 
00:10:04.264 PASS test hold_by_poller 00:10:04.264 Starting test hold_by_message 00:10:04.264 PASS test hold_by_message 00:10:04.264 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:04.264 100014 assertions passed 00:10:04.264 0 assertions failed 00:10:04.264 ************************************ 00:10:04.264 END TEST thread_spdk_lock 00:10:04.264 ************************************ 00:10:04.264 00:10:04.264 real 0m1.427s 00:10:04.264 user 0m1.705s 00:10:04.264 sys 0m0.146s 00:10:04.264 13:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.264 13:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.524 ************************************ 00:10:04.524 END TEST thread 00:10:04.524 ************************************ 00:10:04.524 00:10:04.524 real 0m5.620s 00:10:04.524 user 0m5.261s 00:10:04.524 sys 0m0.588s 00:10:04.524 13:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.524 13:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.524 13:33:43 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:04.524 13:33:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:04.524 13:33:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:04.524 13:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.524 ************************************ 00:10:04.524 START TEST accel 00:10:04.524 ************************************ 00:10:04.524 13:33:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:04.524 * Looking for test storage... 00:10:04.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:04.524 13:33:43 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:04.524 13:33:43 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:04.524 13:33:43 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:04.524 13:33:43 -- accel/accel.sh@59 -- # spdk_tgt_pid=108342 00:10:04.524 13:33:43 -- accel/accel.sh@60 -- # waitforlisten 108342 00:10:04.524 13:33:43 -- common/autotest_common.sh@819 -- # '[' -z 108342 ']' 00:10:04.524 13:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.524 13:33:43 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:04.524 13:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:04.524 13:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.524 13:33:43 -- accel/accel.sh@58 -- # build_accel_config 00:10:04.524 13:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:04.524 13:33:43 -- common/autotest_common.sh@10 -- # set +x 00:10:04.524 13:33:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.524 13:33:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.524 13:33:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.524 13:33:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.524 13:33:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.524 13:33:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.524 13:33:43 -- accel/accel.sh@42 -- # jq -r . 00:10:04.784 [2024-07-10 13:33:43.917185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
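The -c /dev/fd/63 at this accel target's launch is a process substitution: spdk_tgt reads its JSON config from a pipe filled by build_accel_config. With no crypto or accel-module options set, every [[ 0 -gt 0 ]] guard in the trace is false and the config array stays empty. A sketch of the mechanism; the exact JSON skeleton is an assumption, while the IFS=, join and the jq -r . pass are what the trace shows:

    build_accel_config() {
        local IFS=,
        # An empty accel_json_cfg yields an empty "config" list; jq -r .
        # pretty-prints the result and, usefully, fails on malformed JSON.
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    accel_json_cfg=()
    build/bin/spdk_tgt -c <(build_accel_config)   # typically surfaces as /dev/fd/63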
00:10:04.784 [2024-07-10 13:33:43.917411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108342 ] 00:10:04.784 [2024-07-10 13:33:44.082658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.043 [2024-07-10 13:33:44.340578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:05.044 [2024-07-10 13:33:44.340894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.424 [2024-07-10 13:33:45.358001] json_config.c: 128:rpc_client_check_timeout: *WARNING*: RPC client command timeout. 00:10:06.424 13:33:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:06.424 13:33:45 -- common/autotest_common.sh@852 -- # return 0 00:10:06.424 13:33:45 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:06.424 13:33:45 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:06.424 13:33:45 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:06.424 13:33:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:06.424 13:33:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.424 13:33:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # 
IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # IFS== 00:10:06.424 13:33:45 -- accel/accel.sh@64 -- # read -r opc module 00:10:06.424 13:33:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:06.424 13:33:45 -- accel/accel.sh@67 -- # killprocess 108342 00:10:06.424 13:33:45 -- common/autotest_common.sh@926 -- # '[' -z 108342 ']' 00:10:06.424 13:33:45 -- common/autotest_common.sh@930 -- # kill -0 108342 00:10:06.424 13:33:45 -- common/autotest_common.sh@931 -- # uname 00:10:06.424 13:33:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:06.424 13:33:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108342 00:10:06.424 13:33:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:06.424 13:33:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:06.424 13:33:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108342' 00:10:06.424 killing process with pid 108342 00:10:06.424 13:33:45 -- common/autotest_common.sh@945 -- # kill 108342 00:10:06.424 13:33:45 -- common/autotest_common.sh@950 -- # wait 108342 00:10:09.736 13:33:48 -- accel/accel.sh@68 -- # trap - ERR 00:10:09.736 13:33:48 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:09.736 13:33:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:09.736 13:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.736 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:09.736 13:33:48 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:09.736 13:33:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:09.736 13:33:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.736 13:33:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.736 13:33:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@34 -- # [[ 0 -gt 
0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.736 13:33:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.736 13:33:48 -- accel/accel.sh@42 -- # jq -r . 00:10:09.736 13:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.736 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:09.736 13:33:48 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:09.736 13:33:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:09.736 13:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.736 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:10:09.736 ************************************ 00:10:09.736 START TEST accel_missing_filename 00:10:09.736 ************************************ 00:10:09.736 13:33:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:09.736 13:33:48 -- common/autotest_common.sh@640 -- # local es=0 00:10:09.736 13:33:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:09.736 13:33:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:09.736 13:33:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.736 13:33:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:09.736 13:33:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.736 13:33:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:09.736 13:33:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:09.736 13:33:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.736 13:33:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.736 13:33:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.736 13:33:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.736 13:33:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.736 13:33:48 -- accel/accel.sh@42 -- # jq -r . 00:10:09.736 [2024-07-10 13:33:48.608489] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:09.736 [2024-07-10 13:33:48.608735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108445 ] 00:10:09.736 [2024-07-10 13:33:48.774854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.736 [2024-07-10 13:33:49.042575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.996 [2024-07-10 13:33:49.329912] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.935 [2024-07-10 13:33:50.009707] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:11.195 A filename is required. 
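The failure just logged is the point of accel_missing_filename: the compress workload needs an input file, and accel_perf refuses to start without one. Going by the option summary printed near the end of this log (-l names the uncompressed input for compress/decompress, -o 0 sizes transfers to the whole file), a failing and a well-formed invocation look like:

    # As exercised above: no input file, exits with "A filename is required."
    build/examples/accel_perf -t 1 -w compress
    # Hypothetical valid run against the test corpus used later in this log:
    build/examples/accel_perf -t 1 -w compress -l test/accel/bib -o 0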
00:10:11.195 ************************************ 00:10:11.195 END TEST accel_missing_filename 00:10:11.195 ************************************ 00:10:11.195 13:33:50 -- common/autotest_common.sh@643 -- # es=234 00:10:11.195 13:33:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:11.195 13:33:50 -- common/autotest_common.sh@652 -- # es=106 00:10:11.195 13:33:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:11.195 13:33:50 -- common/autotest_common.sh@660 -- # es=1 00:10:11.195 13:33:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:11.195 00:10:11.195 real 0m1.966s 00:10:11.195 user 0m1.698s 00:10:11.195 sys 0m0.224s 00:10:11.195 13:33:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.195 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:11.455 13:33:50 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:11.455 13:33:50 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:11.455 13:33:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.455 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:10:11.455 ************************************ 00:10:11.455 START TEST accel_compress_verify 00:10:11.455 ************************************ 00:10:11.455 13:33:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:11.455 13:33:50 -- common/autotest_common.sh@640 -- # local es=0 00:10:11.455 13:33:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:11.455 13:33:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:11.455 13:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.455 13:33:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:11.455 13:33:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.455 13:33:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:11.455 13:33:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.455 13:33:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:11.455 13:33:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.455 13:33:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.455 13:33:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.455 13:33:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.455 13:33:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.455 13:33:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.455 13:33:50 -- accel/accel.sh@42 -- # jq -r . 00:10:11.455 [2024-07-10 13:33:50.637954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:11.455 [2024-07-10 13:33:50.638134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108488 ] 00:10:11.455 [2024-07-10 13:33:50.799720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.024 [2024-07-10 13:33:51.083002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.284 [2024-07-10 13:33:51.384841] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:12.853 [2024-07-10 13:33:51.970775] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:13.113 00:10:13.113 Compression does not support the verify option, aborting. 00:10:13.113 ************************************ 00:10:13.113 END TEST accel_compress_verify 00:10:13.113 ************************************ 00:10:13.113 13:33:52 -- common/autotest_common.sh@643 -- # es=161 00:10:13.113 13:33:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:13.113 13:33:52 -- common/autotest_common.sh@652 -- # es=33 00:10:13.113 13:33:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:13.113 13:33:52 -- common/autotest_common.sh@660 -- # es=1 00:10:13.113 13:33:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:13.113 00:10:13.113 real 0m1.793s 00:10:13.113 user 0m1.521s 00:10:13.113 sys 0m0.228s 00:10:13.113 13:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.113 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.113 13:33:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:13.113 13:33:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:13.113 13:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.113 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.113 ************************************ 00:10:13.113 START TEST accel_wrong_workload 00:10:13.113 ************************************ 00:10:13.113 13:33:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:13.113 13:33:52 -- common/autotest_common.sh@640 -- # local es=0 00:10:13.113 13:33:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:13.113 13:33:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:13.113 13:33:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.113 13:33:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:13.113 13:33:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.113 13:33:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:13.113 13:33:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:13.113 13:33:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.113 13:33:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.113 13:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.113 13:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.113 13:33:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.113 13:33:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.113 13:33:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.113 13:33:52 -- accel/accel.sh@42 -- # jq -r . 
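These NOT-prefixed tests (accel_missing_filename, accel_compress_verify above, accel_wrong_workload here) pass only when accel_perf exits non-zero. The wrapper's xtrace is what produces the es= lines in this log: it captures the exit status, folds codes above 128 down by 128 (hence the es=234 -> es=106 -> es=1 and es=161 -> es=33 -> es=1 sequences earlier), and inverts the result. A minimal sketch of the idiom, assuming a simplified stand-in rather than the exact autotest_common.sh helper:

    NOT() {
        local es=0
        "$@" || es=$?
        # codes above 128 normally mean "terminated by signal (code - 128)";
        # fold them down so the check below sees the underlying value
        if (( es > 128 )); then
            es=$(( es - 128 ))
        fi
        # return success only if the wrapped command failed
        (( es != 0 ))
    }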
00:10:13.374 Unsupported workload type: foobar 00:10:13.374 [2024-07-10 13:33:52.489306] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:13.374 accel_perf options: 00:10:13.374 [-h help message] 00:10:13.374 [-q queue depth per core] 00:10:13.374 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:13.374 [-T number of threads per core 00:10:13.374 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:13.374 [-t time in seconds] 00:10:13.374 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:13.374 [ dif_verify, , dif_generate, dif_generate_copy 00:10:13.374 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:13.374 [-l for compress/decompress workloads, name of uncompressed input file 00:10:13.374 [-S for crc32c workload, use this seed value (default 0) 00:10:13.374 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:13.374 [-f for fill workload, use this BYTE value (default 255) 00:10:13.374 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:13.374 [-y verify result if this switch is on] 00:10:13.374 [-a tasks to allocate per core (default: same value as -q)] 00:10:13.374 Can be used to spread operations across a wider range of memory. 00:10:13.374 13:33:52 -- common/autotest_common.sh@643 -- # es=1 00:10:13.374 13:33:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:13.374 13:33:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:13.374 13:33:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:13.374 00:10:13.374 real 0m0.082s 00:10:13.374 user 0m0.095s 00:10:13.374 sys 0m0.047s 00:10:13.374 13:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.374 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.374 ************************************ 00:10:13.374 END TEST accel_wrong_workload 00:10:13.374 ************************************ 00:10:13.374 13:33:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:13.374 13:33:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:13.374 13:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.374 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.374 ************************************ 00:10:13.374 START TEST accel_negative_buffers 00:10:13.374 ************************************ 00:10:13.374 13:33:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:13.374 13:33:52 -- common/autotest_common.sh@640 -- # local es=0 00:10:13.374 13:33:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:13.374 13:33:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:13.374 13:33:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.374 13:33:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:13.374 13:33:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:13.374 13:33:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:13.374 13:33:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:13.374 13:33:52 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:13.374 13:33:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.374 13:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.374 13:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.374 13:33:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.374 13:33:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.374 13:33:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.374 13:33:52 -- accel/accel.sh@42 -- # jq -r . 00:10:13.374 -x option must be non-negative. 00:10:13.374 [2024-07-10 13:33:52.629759] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:13.374 accel_perf options: 00:10:13.374 [-h help message] 00:10:13.374 [-q queue depth per core] 00:10:13.374 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:13.374 [-T number of threads per core 00:10:13.374 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:13.374 [-t time in seconds] 00:10:13.374 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:13.374 [ dif_verify, , dif_generate, dif_generate_copy 00:10:13.374 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:13.374 [-l for compress/decompress workloads, name of uncompressed input file 00:10:13.374 [-S for crc32c workload, use this seed value (default 0) 00:10:13.374 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:13.374 [-f for fill workload, use this BYTE value (default 255) 00:10:13.374 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:13.374 [-y verify result if this switch is on] 00:10:13.374 [-a tasks to allocate per core (default: same value as -q)] 00:10:13.374 Can be used to spread operations across a wider range of memory. 
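The usage text above spells out why this run must fail: -x sets the number of xor source buffers and its minimum is 2, so the -x -1 passed here is rejected during argument parsing and accel_perf exits non-zero, which is exactly what the NOT wrapper requires. For contrast, a sketch of the smallest valid xor invocation, assuming the same binary path used throughout this log:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2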
00:10:13.374 ************************************ 00:10:13.374 END TEST accel_negative_buffers 00:10:13.374 ************************************ 00:10:13.375 13:33:52 -- common/autotest_common.sh@643 -- # es=1 00:10:13.375 13:33:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:13.375 13:33:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:13.375 13:33:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:13.375 00:10:13.375 real 0m0.081s 00:10:13.375 user 0m0.095s 00:10:13.375 sys 0m0.039s 00:10:13.375 13:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.375 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 13:33:52 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:13.375 13:33:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:13.375 13:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.375 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.375 ************************************ 00:10:13.375 START TEST accel_crc32c 00:10:13.375 ************************************ 00:10:13.375 13:33:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:13.375 13:33:52 -- accel/accel.sh@16 -- # local accel_opc 00:10:13.375 13:33:52 -- accel/accel.sh@17 -- # local accel_module 00:10:13.375 13:33:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:13.375 13:33:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:13.375 13:33:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.375 13:33:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.375 13:33:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.375 13:33:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.375 13:33:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.375 13:33:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.375 13:33:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.375 13:33:52 -- accel/accel.sh@42 -- # jq -r . 00:10:13.634 [2024-07-10 13:33:52.777946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:13.634 [2024-07-10 13:33:52.778181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108594 ] 00:10:13.634 [2024-07-10 13:33:52.941103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.891 [2024-07-10 13:33:53.150262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.425 13:33:55 -- accel/accel.sh@18 -- # out=' 00:10:16.425 SPDK Configuration: 00:10:16.425 Core mask: 0x1 00:10:16.425 00:10:16.425 Accel Perf Configuration: 00:10:16.425 Workload Type: crc32c 00:10:16.425 CRC-32C seed: 32 00:10:16.425 Transfer size: 4096 bytes 00:10:16.425 Vector count 1 00:10:16.425 Module: software 00:10:16.425 Queue depth: 32 00:10:16.425 Allocate depth: 32 00:10:16.425 # threads/core: 1 00:10:16.425 Run time: 1 seconds 00:10:16.425 Verify: Yes 00:10:16.425 00:10:16.425 Running for 1 seconds... 
00:10:16.425 00:10:16.425 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:16.425 ------------------------------------------------------------------------------------ 00:10:16.425 0,0 553280/s 2161 MiB/s 0 0 00:10:16.425 ==================================================================================== 00:10:16.425 Total 553280/s 2161 MiB/s 0 0' 00:10:16.425 13:33:55 -- accel/accel.sh@20 -- # IFS=: 00:10:16.425 13:33:55 -- accel/accel.sh@20 -- # read -r var val 00:10:16.425 13:33:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:16.425 13:33:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:16.425 13:33:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.425 13:33:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.425 13:33:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.425 13:33:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.425 13:33:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.425 13:33:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.425 13:33:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.425 13:33:55 -- accel/accel.sh@42 -- # jq -r . 00:10:16.425 [2024-07-10 13:33:55.419918] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:16.425 [2024-07-10 13:33:55.420174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108645 ] 00:10:16.425 [2024-07-10 13:33:55.587945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.684 [2024-07-10 13:33:55.811731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=0x1 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=crc32c 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=32 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=software 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@23 -- # accel_module=software 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=32 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=32 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=1 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val=Yes 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:16.943 13:33:56 -- accel/accel.sh@21 -- # val= 00:10:16.943 13:33:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # IFS=: 00:10:16.943 13:33:56 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 
-- accel/accel.sh@20 -- # read -r var val 00:10:18.845 13:33:58 -- accel/accel.sh@21 -- # val= 00:10:18.845 13:33:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # IFS=: 00:10:18.845 13:33:58 -- accel/accel.sh@20 -- # read -r var val 00:10:18.845 ************************************ 00:10:18.845 END TEST accel_crc32c 00:10:18.845 ************************************ 00:10:18.845 13:33:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:18.845 13:33:58 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:18.845 13:33:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:18.845 00:10:18.845 real 0m5.354s 00:10:18.845 user 0m4.816s 00:10:18.845 sys 0m0.385s 00:10:18.845 13:33:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.845 13:33:58 -- common/autotest_common.sh@10 -- # set +x 00:10:18.845 13:33:58 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:18.845 13:33:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:18.845 13:33:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.845 13:33:58 -- common/autotest_common.sh@10 -- # set +x 00:10:18.845 ************************************ 00:10:18.845 START TEST accel_crc32c_C2 00:10:18.845 ************************************ 00:10:18.845 13:33:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:18.845 13:33:58 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.845 13:33:58 -- accel/accel.sh@17 -- # local accel_module 00:10:18.845 13:33:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:18.845 13:33:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:18.845 13:33:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.845 13:33:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.845 13:33:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.845 13:33:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.845 13:33:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.845 13:33:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.845 13:33:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.845 13:33:58 -- accel/accel.sh@42 -- # jq -r . 00:10:18.845 [2024-07-10 13:33:58.191733] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:18.845 [2024-07-10 13:33:58.191940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108704 ] 00:10:19.104 [2024-07-10 13:33:58.332671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.362 [2024-07-10 13:33:58.557692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.965 13:34:00 -- accel/accel.sh@18 -- # out=' 00:10:21.965 SPDK Configuration: 00:10:21.965 Core mask: 0x1 00:10:21.965 00:10:21.965 Accel Perf Configuration: 00:10:21.965 Workload Type: crc32c 00:10:21.965 CRC-32C seed: 0 00:10:21.965 Transfer size: 4096 bytes 00:10:21.965 Vector count 2 00:10:21.965 Module: software 00:10:21.965 Queue depth: 32 00:10:21.965 Allocate depth: 32 00:10:21.965 # threads/core: 1 00:10:21.966 Run time: 1 seconds 00:10:21.966 Verify: Yes 00:10:21.966 00:10:21.966 Running for 1 seconds... 
00:10:21.966 00:10:21.966 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.966 ------------------------------------------------------------------------------------ 00:10:21.966 0,0 427808/s 1671 MiB/s 0 0 00:10:21.966 ==================================================================================== 00:10:21.966 Total 427808/s 1671 MiB/s 0 0' 00:10:21.966 13:34:00 -- accel/accel.sh@20 -- # IFS=: 00:10:21.966 13:34:00 -- accel/accel.sh@20 -- # read -r var val 00:10:21.966 13:34:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:21.966 13:34:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.966 13:34:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:21.966 13:34:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.966 13:34:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.966 13:34:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.966 13:34:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.966 13:34:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.966 13:34:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.966 13:34:00 -- accel/accel.sh@42 -- # jq -r . 00:10:21.966 [2024-07-10 13:34:00.866537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:21.966 [2024-07-10 13:34:00.866732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108739 ] 00:10:21.966 [2024-07-10 13:34:01.024008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.966 [2024-07-10 13:34:01.253536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=0x1 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=crc32c 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=0 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 --
accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=software 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@23 -- # accel_module=software 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=32 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=32 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=1 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val=Yes 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:22.225 13:34:01 -- accel/accel.sh@21 -- # val= 00:10:22.225 13:34:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # IFS=: 00:10:22.225 13:34:01 -- accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- 
accel/accel.sh@20 -- # read -r var val 00:10:24.133 13:34:03 -- accel/accel.sh@21 -- # val= 00:10:24.133 13:34:03 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # IFS=: 00:10:24.133 13:34:03 -- accel/accel.sh@20 -- # read -r var val 00:10:24.392 ************************************ 00:10:24.392 END TEST accel_crc32c_C2 00:10:24.392 ************************************ 00:10:24.392 13:34:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:24.392 13:34:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:24.392 13:34:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.392 00:10:24.392 real 0m5.361s 00:10:24.392 user 0m4.867s 00:10:24.392 sys 0m0.336s 00:10:24.392 13:34:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.392 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:10:24.392 13:34:03 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:24.392 13:34:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:24.392 13:34:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.392 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:10:24.392 ************************************ 00:10:24.392 START TEST accel_copy 00:10:24.392 ************************************ 00:10:24.392 13:34:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:24.392 13:34:03 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.392 13:34:03 -- accel/accel.sh@17 -- # local accel_module 00:10:24.392 13:34:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:24.392 13:34:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:24.392 13:34:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.392 13:34:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.392 13:34:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.393 13:34:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.393 13:34:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.393 13:34:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.393 13:34:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.393 13:34:03 -- accel/accel.sh@42 -- # jq -r . 00:10:24.393 [2024-07-10 13:34:03.600975] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:24.393 [2024-07-10 13:34:03.601225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108799 ] 00:10:24.652 [2024-07-10 13:34:03.764585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.652 [2024-07-10 13:34:04.008726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.186 13:34:06 -- accel/accel.sh@18 -- # out=' 00:10:27.186 SPDK Configuration: 00:10:27.186 Core mask: 0x1 00:10:27.186 00:10:27.186 Accel Perf Configuration: 00:10:27.186 Workload Type: copy 00:10:27.186 Transfer size: 4096 bytes 00:10:27.186 Vector count 1 00:10:27.186 Module: software 00:10:27.186 Queue depth: 32 00:10:27.186 Allocate depth: 32 00:10:27.186 # threads/core: 1 00:10:27.186 Run time: 1 seconds 00:10:27.186 Verify: Yes 00:10:27.186 00:10:27.186 Running for 1 seconds... 
00:10:27.186 00:10:27.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.186 ------------------------------------------------------------------------------------ 00:10:27.186 0,0 350976/s 1371 MiB/s 0 0 00:10:27.186 ==================================================================================== 00:10:27.186 Total 350976/s 1371 MiB/s 0 0' 00:10:27.186 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.186 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.186 13:34:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:27.186 13:34:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.186 13:34:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:27.186 13:34:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.186 13:34:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.186 13:34:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.186 13:34:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.186 13:34:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.186 13:34:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.186 13:34:06 -- accel/accel.sh@42 -- # jq -r . 00:10:27.186 [2024-07-10 13:34:06.337112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:27.186 [2024-07-10 13:34:06.337303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108857 ] 00:10:27.186 [2024-07-10 13:34:06.494029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.446 [2024-07-10 13:34:06.713747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=0x1 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=copy 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- 
accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=software 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@23 -- # accel_module=software 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=32 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=32 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=1 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val=Yes 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:27.706 13:34:06 -- accel/accel.sh@21 -- # val= 00:10:27.706 13:34:06 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # IFS=: 00:10:27.706 13:34:06 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # read -r var val 00:10:29.629 13:34:08 -- accel/accel.sh@21 -- # val= 00:10:29.629 13:34:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.629 13:34:08 -- accel/accel.sh@20 -- # IFS=: 00:10:29.629 13:34:08 -- 
accel/accel.sh@20 -- # read -r var val 00:10:29.889 13:34:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.889 13:34:08 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:29.889 13:34:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.889 00:10:29.889 real 0m5.450s 00:10:29.889 user 0m4.946s 00:10:29.889 sys 0m0.348s 00:10:29.889 13:34:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.889 ************************************ 00:10:29.889 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:10:29.889 END TEST accel_copy 00:10:29.889 ************************************ 00:10:29.889 13:34:09 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:29.889 13:34:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:29.889 13:34:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.889 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:10:29.889 ************************************ 00:10:29.889 START TEST accel_fill 00:10:29.889 ************************************ 00:10:29.889 13:34:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:29.889 13:34:09 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.889 13:34:09 -- accel/accel.sh@17 -- # local accel_module 00:10:29.889 13:34:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:29.889 13:34:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:29.889 13:34:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.889 13:34:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.889 13:34:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.889 13:34:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.889 13:34:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.889 13:34:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.889 13:34:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.889 13:34:09 -- accel/accel.sh@42 -- # jq -r . 00:10:29.889 [2024-07-10 13:34:09.106185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:29.889 [2024-07-10 13:34:09.106361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108918 ] 00:10:30.148 [2024-07-10 13:34:09.264011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.148 [2024-07-10 13:34:09.488893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.702 13:34:11 -- accel/accel.sh@18 -- # out=' 00:10:32.702 SPDK Configuration: 00:10:32.702 Core mask: 0x1 00:10:32.702 00:10:32.702 Accel Perf Configuration: 00:10:32.702 Workload Type: fill 00:10:32.702 Fill pattern: 0x80 00:10:32.702 Transfer size: 4096 bytes 00:10:32.702 Vector count 1 00:10:32.702 Module: software 00:10:32.702 Queue depth: 64 00:10:32.702 Allocate depth: 64 00:10:32.702 # threads/core: 1 00:10:32.702 Run time: 1 seconds 00:10:32.702 Verify: Yes 00:10:32.702 00:10:32.702 Running for 1 seconds... 
00:10:32.702 00:10:32.702 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:32.702 ------------------------------------------------------------------------------------ 00:10:32.702 0,0 570304/s 2227 MiB/s 0 0 00:10:32.702 ==================================================================================== 00:10:32.702 Total 570304/s 2227 MiB/s 0 0' 00:10:32.702 13:34:11 -- accel/accel.sh@20 -- # IFS=: 00:10:32.702 13:34:11 -- accel/accel.sh@20 -- # read -r var val 00:10:32.702 13:34:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.702 13:34:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.702 13:34:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:32.702 13:34:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.702 13:34:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.702 13:34:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.702 13:34:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.702 13:34:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.702 13:34:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.702 13:34:11 -- accel/accel.sh@42 -- # jq -r . 00:10:32.702 [2024-07-10 13:34:11.805916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:32.702 [2024-07-10 13:34:11.806098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108950 ] 00:10:32.702 [2024-07-10 13:34:11.963253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.960 [2024-07-10 13:34:12.195439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=0x1 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=fill 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=0x80 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 
00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=software 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=64 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=64 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=1 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val=Yes 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:33.218 13:34:12 -- accel/accel.sh@21 -- # val= 00:10:33.218 13:34:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # IFS=: 00:10:33.218 13:34:12 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 
00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 13:34:14 -- accel/accel.sh@21 -- # val= 00:10:35.120 13:34:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # IFS=: 00:10:35.120 13:34:14 -- accel/accel.sh@20 -- # read -r var val 00:10:35.120 ************************************ 00:10:35.120 END TEST accel_fill 00:10:35.120 ************************************ 00:10:35.120 13:34:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:35.120 13:34:14 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:35.120 13:34:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.120 00:10:35.120 real 0m5.420s 00:10:35.120 user 0m4.954s 00:10:35.120 sys 0m0.324s 00:10:35.120 13:34:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.120 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:10:35.377 13:34:14 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:35.377 13:34:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:35.377 13:34:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.377 13:34:14 -- common/autotest_common.sh@10 -- # set +x 00:10:35.377 ************************************ 00:10:35.377 START TEST accel_copy_crc32c 00:10:35.377 ************************************ 00:10:35.377 13:34:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:35.377 13:34:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:35.377 13:34:14 -- accel/accel.sh@17 -- # local accel_module 00:10:35.377 13:34:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:35.377 13:34:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:35.377 13:34:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.377 13:34:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.377 13:34:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.377 13:34:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.377 13:34:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.377 13:34:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.377 13:34:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.377 13:34:14 -- accel/accel.sh@42 -- # jq -r . 00:10:35.377 [2024-07-10 13:34:14.588130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:35.377 [2024-07-10 13:34:14.588323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109025 ] 00:10:35.634 [2024-07-10 13:34:14.748827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.634 [2024-07-10 13:34:14.977160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.161 13:34:17 -- accel/accel.sh@18 -- # out=' 00:10:38.161 SPDK Configuration: 00:10:38.161 Core mask: 0x1 00:10:38.161 00:10:38.161 Accel Perf Configuration: 00:10:38.161 Workload Type: copy_crc32c 00:10:38.161 CRC-32C seed: 0 00:10:38.161 Vector size: 4096 bytes 00:10:38.161 Transfer size: 4096 bytes 00:10:38.161 Vector count 1 00:10:38.161 Module: software 00:10:38.161 Queue depth: 32 00:10:38.161 Allocate depth: 32 00:10:38.161 # threads/core: 1 00:10:38.161 Run time: 1 seconds 00:10:38.161 Verify: Yes 00:10:38.161 00:10:38.161 Running for 1 seconds... 
00:10:38.161 00:10:38.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:38.161 ------------------------------------------------------------------------------------ 00:10:38.161 0,0 280192/s 1094 MiB/s 0 0 00:10:38.161 ==================================================================================== 00:10:38.161 Total 280192/s 1094 MiB/s 0 0' 00:10:38.161 13:34:17 -- accel/accel.sh@20 -- # IFS=: 00:10:38.161 13:34:17 -- accel/accel.sh@20 -- # read -r var val 00:10:38.161 13:34:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:38.161 13:34:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.161 13:34:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:38.161 13:34:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.161 13:34:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.161 13:34:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.161 13:34:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.161 13:34:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.161 13:34:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.161 13:34:17 -- accel/accel.sh@42 -- # jq -r . 00:10:38.161 [2024-07-10 13:34:17.471676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:38.161 [2024-07-10 13:34:17.471920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109065 ] 00:10:38.417 [2024-07-10 13:34:17.630804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.675 [2024-07-10 13:34:17.915025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=0x1 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=0 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 
13:34:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=software 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@23 -- # accel_module=software 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=32 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=32 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=1 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.933 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.933 13:34:18 -- accel/accel.sh@21 -- # val=Yes 00:10:38.933 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.934 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.934 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:38.934 13:34:18 -- accel/accel.sh@21 -- # val= 00:10:38.934 13:34:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # IFS=: 00:10:38.934 13:34:18 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 
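The wall of IFS=: / read -r var val / case "$var" in lines here and below is accel.sh consuming the configuration block that accel_perf prints, one key/value pair at a time, to learn which opcode and module actually ran. A minimal sketch of the shape of that loop, reconstructed from this trace (the variable handling is an assumption, not the verbatim accel.sh source):

    # Parse accel_perf output of the form "Workload Type: copy_crc32c" etc.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;   # e.g. copy_crc32c
            *Module*)          accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done <<< "$out"

The later [[ -n software ]] / [[ -n copy_crc32c ]] checks in this log are assertions on what that loop extracted.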
00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 13:34:20 -- accel/accel.sh@21 -- # val= 00:10:41.473 13:34:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # IFS=: 00:10:41.473 13:34:20 -- accel/accel.sh@20 -- # read -r var val 00:10:41.473 ************************************ 00:10:41.473 END TEST accel_copy_crc32c 00:10:41.473 ************************************ 00:10:41.473 13:34:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:41.473 13:34:20 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:41.473 13:34:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.473 00:10:41.473 real 0m5.857s 00:10:41.473 user 0m5.304s 00:10:41.473 sys 0m0.405s 00:10:41.473 13:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.473 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:10:41.473 13:34:20 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:41.473 13:34:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:41.473 13:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:41.473 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:10:41.473 ************************************ 00:10:41.473 START TEST accel_copy_crc32c_C2 00:10:41.473 ************************************ 00:10:41.473 13:34:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:41.473 13:34:20 -- accel/accel.sh@16 -- # local accel_opc 00:10:41.473 13:34:20 -- accel/accel.sh@17 -- # local accel_module 00:10:41.473 13:34:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:41.474 13:34:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:41.474 13:34:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.474 13:34:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.474 13:34:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.474 13:34:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.474 13:34:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.474 13:34:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.474 13:34:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.474 13:34:20 -- accel/accel.sh@42 -- # jq -r . 00:10:41.474 [2024-07-10 13:34:20.507118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
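Each "Starting SPDK ... initialization" line, like the one just above, is immediately followed by the DPDK EAL argument dump for that accel_perf process. Decoding the recurring flags: -c 0x1 matches the "Core mask: 0x1" in the test configuration, --file-prefix=spdk_pidNNNNN namespaces hugepage files by process ID, --iova-mode=pa selects physical-address IOVA, and --no-shconf plus --huge-unlink keep per-run state from leaking between back-to-back runs. The PID in the prefix is handy for correlating runs; a quick, hypothetical way to list them from a saved copy of this log (the filename is a placeholder):

    grep -o 'spdk_pid[0-9]*' ubuntu20-vg-autotest.log | sort -u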
00:10:41.474 [2024-07-10 13:34:20.507331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109124 ] 00:10:41.474 [2024-07-10 13:34:20.665348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.732 [2024-07-10 13:34:20.971183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.267 13:34:23 -- accel/accel.sh@18 -- # out=' 00:10:44.267 SPDK Configuration: 00:10:44.267 Core mask: 0x1 00:10:44.267 00:10:44.267 Accel Perf Configuration: 00:10:44.267 Workload Type: copy_crc32c 00:10:44.267 CRC-32C seed: 0 00:10:44.267 Vector size: 4096 bytes 00:10:44.267 Transfer size: 8192 bytes 00:10:44.267 Vector count 2 00:10:44.267 Module: software 00:10:44.267 Queue depth: 32 00:10:44.267 Allocate depth: 32 00:10:44.267 # threads/core: 1 00:10:44.267 Run time: 1 seconds 00:10:44.267 Verify: Yes 00:10:44.267 00:10:44.267 Running for 1 seconds... 00:10:44.267 00:10:44.267 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.267 ------------------------------------------------------------------------------------ 00:10:44.267 0,0 168352/s 1315 MiB/s 0 0 00:10:44.267 ==================================================================================== 00:10:44.267 Total 168352/s 657 MiB/s 0 0' 00:10:44.267 13:34:23 -- accel/accel.sh@20 -- # IFS=: 00:10:44.267 13:34:23 -- accel/accel.sh@20 -- # read -r var val 00:10:44.267 13:34:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:44.267 13:34:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:44.267 13:34:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.267 13:34:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.267 13:34:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.267 13:34:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.267 13:34:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.267 13:34:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.267 13:34:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.267 13:34:23 -- accel/accel.sh@42 -- # jq -r . 00:10:44.525 [2024-07-10 13:34:23.670710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
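A note on the copy_crc32c -C 2 table above: the per-core row reports 1315 MiB/s while the Total row reports 657 MiB/s for the same 168352 transfers/s. The two figures appear to be computed against different sizes, the 8192-byte transfer size versus the 4096-byte vector size; the arithmetic reproduces both:

    echo $(( 168352 * 8192 / 1024 / 1024 ))   # 1315 (MiB/s at the 8 KiB transfer size)
    echo $(( 168352 * 4096 / 1024 / 1024 ))   # 657  (MiB/s at the 4 KiB vector size)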
00:10:44.525 [2024-07-10 13:34:23.671472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109170 ] 00:10:44.525 [2024-07-10 13:34:23.835770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.092 [2024-07-10 13:34:24.151986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.352 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.352 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.352 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.352 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.352 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.352 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.352 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.352 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=0x1 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=0 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=software 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@23 -- # accel_module=software 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=32 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=32 
00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=1 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val=Yes 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:45.353 13:34:24 -- accel/accel.sh@21 -- # val= 00:10:45.353 13:34:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # IFS=: 00:10:45.353 13:34:24 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 13:34:26 -- accel/accel.sh@21 -- # val= 00:10:47.928 13:34:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # IFS=: 00:10:47.928 13:34:26 -- accel/accel.sh@20 -- # read -r var val 00:10:47.928 ************************************ 00:10:47.928 END TEST accel_copy_crc32c_C2 00:10:47.928 ************************************ 00:10:47.928 13:34:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.928 13:34:26 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:47.928 13:34:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.928 00:10:47.928 real 0m6.352s 00:10:47.928 user 0m5.738s 00:10:47.928 sys 0m0.464s 00:10:47.928 13:34:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.928 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:10:47.928 13:34:26 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:47.928 13:34:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
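Every suite entry follows the pattern visible just above: accel.sh hands run_test a test name plus the accel_test command line, and run_test prints the START/END banners, times the body (the real/user/sys triplet in the output), and sanity-checks its argument count (the '[' 7 -le 1 ']' test just logged). A rough reconstruction of that shape, inferred from the banners and timing in this log rather than from the verbatim autotest_common.sh:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                # emits the real/user/sys lines seen in this log
        echo "END TEST $name"
    }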
00:10:47.928 13:34:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.928 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:10:47.928 ************************************ 00:10:47.928 START TEST accel_dualcast 00:10:47.928 ************************************ 00:10:47.928 13:34:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:47.928 13:34:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.928 13:34:26 -- accel/accel.sh@17 -- # local accel_module 00:10:47.928 13:34:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:47.928 13:34:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:47.928 13:34:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.928 13:34:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.928 13:34:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.928 13:34:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.928 13:34:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.928 13:34:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.928 13:34:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.928 13:34:26 -- accel/accel.sh@42 -- # jq -r . 00:10:47.928 [2024-07-10 13:34:26.926892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:47.928 [2024-07-10 13:34:26.927096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109247 ] 00:10:47.928 [2024-07-10 13:34:27.086439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.187 [2024-07-10 13:34:27.401687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.725 13:34:29 -- accel/accel.sh@18 -- # out=' 00:10:50.725 SPDK Configuration: 00:10:50.725 Core mask: 0x1 00:10:50.725 00:10:50.725 Accel Perf Configuration: 00:10:50.725 Workload Type: dualcast 00:10:50.725 Transfer size: 4096 bytes 00:10:50.725 Vector count 1 00:10:50.725 Module: software 00:10:50.725 Queue depth: 32 00:10:50.725 Allocate depth: 32 00:10:50.725 # threads/core: 1 00:10:50.725 Run time: 1 seconds 00:10:50.725 Verify: Yes 00:10:50.725 00:10:50.725 Running for 1 seconds... 00:10:50.725 00:10:50.725 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:50.725 ------------------------------------------------------------------------------------ 00:10:50.725 0,0 332352/s 1298 MiB/s 0 0 00:10:50.725 ==================================================================================== 00:10:50.725 Total 332352/s 1298 MiB/s 0 0' 00:10:50.725 13:34:29 -- accel/accel.sh@20 -- # IFS=: 00:10:50.725 13:34:29 -- accel/accel.sh@20 -- # read -r var val 00:10:50.725 13:34:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:50.725 13:34:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:50.725 13:34:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.725 13:34:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.725 13:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.725 13:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.725 13:34:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.725 13:34:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.725 13:34:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.725 13:34:29 -- accel/accel.sh@42 -- # jq -r . 
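The accel_perf invocations recorded in this log map one-to-one onto the configuration blocks they print: -t 1 is the "Run time: 1 seconds", -w names the "Workload Type", -y turns on "Verify: Yes", -C sets the "Vector count" (see the -C 2 run earlier), and -x the xor "Source buffers" (used later). The -c /dev/fd/62 argument feeds a JSON config over file descriptor 62, which is what the jq -r . step just above is validating. Reproducing the dualcast run by hand against the same build tree would look like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y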
00:10:50.725 [2024-07-10 13:34:30.003752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:50.725 [2024-07-10 13:34:30.004546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109293 ] 00:10:50.982 [2024-07-10 13:34:30.159357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.239 [2024-07-10 13:34:30.455915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=0x1 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=dualcast 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=software 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@23 -- # accel_module=software 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=32 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=32 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=1 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 
13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val=Yes 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:51.498 13:34:30 -- accel/accel.sh@21 -- # val= 00:10:51.498 13:34:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # IFS=: 00:10:51.498 13:34:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 13:34:32 -- accel/accel.sh@21 -- # val= 00:10:54.042 13:34:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # IFS=: 00:10:54.042 13:34:32 -- accel/accel.sh@20 -- # read -r var val 00:10:54.042 ************************************ 00:10:54.042 END TEST accel_dualcast 00:10:54.042 ************************************ 00:10:54.042 13:34:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.042 13:34:32 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:54.042 13:34:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.042 00:10:54.042 real 0m5.995s 00:10:54.042 user 0m5.397s 00:10:54.042 sys 0m0.440s 00:10:54.042 13:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.042 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:10:54.042 13:34:32 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:54.042 13:34:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:54.042 13:34:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.042 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:10:54.042 ************************************ 00:10:54.042 START TEST accel_compare 00:10:54.042 ************************************ 00:10:54.042 13:34:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:54.042 
13:34:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.042 13:34:32 -- accel/accel.sh@17 -- # local accel_module 00:10:54.042 13:34:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:54.042 13:34:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:54.042 13:34:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.042 13:34:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.042 13:34:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.042 13:34:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.042 13:34:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.042 13:34:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.042 13:34:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.042 13:34:32 -- accel/accel.sh@42 -- # jq -r . 00:10:54.042 [2024-07-10 13:34:32.982150] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:54.042 [2024-07-10 13:34:32.982480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109347 ] 00:10:54.042 [2024-07-10 13:34:33.153143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.300 [2024-07-10 13:34:33.429794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.833 13:34:35 -- accel/accel.sh@18 -- # out=' 00:10:56.833 SPDK Configuration: 00:10:56.833 Core mask: 0x1 00:10:56.833 00:10:56.833 Accel Perf Configuration: 00:10:56.833 Workload Type: compare 00:10:56.833 Transfer size: 4096 bytes 00:10:56.833 Vector count 1 00:10:56.833 Module: software 00:10:56.833 Queue depth: 32 00:10:56.833 Allocate depth: 32 00:10:56.833 # threads/core: 1 00:10:56.833 Run time: 1 seconds 00:10:56.833 Verify: Yes 00:10:56.833 00:10:56.833 Running for 1 seconds... 00:10:56.833 00:10:56.833 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:56.833 ------------------------------------------------------------------------------------ 00:10:56.833 0,0 543104/s 2121 MiB/s 0 0 00:10:56.833 ==================================================================================== 00:10:56.833 Total 543104/s 2121 MiB/s 0 0' 00:10:56.833 13:34:35 -- accel/accel.sh@20 -- # IFS=: 00:10:56.833 13:34:35 -- accel/accel.sh@20 -- # read -r var val 00:10:56.833 13:34:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:56.833 13:34:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:56.833 13:34:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.833 13:34:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.833 13:34:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.833 13:34:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.833 13:34:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.833 13:34:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.833 13:34:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.833 13:34:35 -- accel/accel.sh@42 -- # jq -r . 00:10:56.833 [2024-07-10 13:34:35.745921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
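In every results table so far the last two columns, Failed and Miscompares, are zero: with -y the tool verifies each completed operation against the expected output ("Verify: Yes" in the configuration). A small, hypothetical scan for any table row that did record failures, again with the log filename as a placeholder:

    awk '/Total/ && !/ 0 0/' ubuntu20-vg-autotest.log   # prints Total rows whose failure columns are non-zero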
00:10:56.833 [2024-07-10 13:34:35.746157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109410 ] 00:10:56.833 [2024-07-10 13:34:35.909049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.833 [2024-07-10 13:34:36.137391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val=0x1 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val=compare 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.092 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.092 13:34:36 -- accel/accel.sh@21 -- # val=software 00:10:57.092 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val=32 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val=32 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val=1 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val=Yes 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:57.093 13:34:36 -- accel/accel.sh@21 -- # val= 00:10:57.093 13:34:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # IFS=: 00:10:57.093 13:34:36 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 13:34:38 -- accel/accel.sh@21 -- # val= 00:10:59.057 13:34:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # IFS=: 00:10:59.057 13:34:38 -- accel/accel.sh@20 -- # read -r var val 00:10:59.057 ************************************ 00:10:59.057 END TEST accel_compare 00:10:59.057 ************************************ 00:10:59.057 13:34:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:59.057 13:34:38 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:59.057 13:34:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.057 00:10:59.057 real 0m5.474s 00:10:59.057 user 0m4.951s 00:10:59.057 sys 0m0.374s 00:10:59.057 13:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.057 13:34:38 -- common/autotest_common.sh@10 -- # set +x 00:10:59.317 13:34:38 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:59.317 13:34:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:59.317 13:34:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.317 13:34:38 -- common/autotest_common.sh@10 -- # set +x 00:10:59.317 ************************************ 00:10:59.317 START TEST accel_xor 00:10:59.317 ************************************ 00:10:59.317 13:34:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:59.317 13:34:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:59.317 13:34:38 -- accel/accel.sh@17 -- # local accel_module 00:10:59.317 
13:34:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:59.317 13:34:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:59.317 13:34:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.317 13:34:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.317 13:34:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.317 13:34:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.317 13:34:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.317 13:34:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.317 13:34:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.317 13:34:38 -- accel/accel.sh@42 -- # jq -r . 00:10:59.317 [2024-07-10 13:34:38.511261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:59.317 [2024-07-10 13:34:38.511483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109457 ] 00:10:59.577 [2024-07-10 13:34:38.678337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.577 [2024-07-10 13:34:38.915481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.109 13:34:41 -- accel/accel.sh@18 -- # out=' 00:11:02.109 SPDK Configuration: 00:11:02.109 Core mask: 0x1 00:11:02.109 00:11:02.109 Accel Perf Configuration: 00:11:02.109 Workload Type: xor 00:11:02.109 Source buffers: 2 00:11:02.109 Transfer size: 4096 bytes 00:11:02.109 Vector count 1 00:11:02.109 Module: software 00:11:02.109 Queue depth: 32 00:11:02.109 Allocate depth: 32 00:11:02.109 # threads/core: 1 00:11:02.109 Run time: 1 seconds 00:11:02.109 Verify: Yes 00:11:02.109 00:11:02.109 Running for 1 seconds... 00:11:02.109 00:11:02.109 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.109 ------------------------------------------------------------------------------------ 00:11:02.109 0,0 345312/s 1348 MiB/s 0 0 00:11:02.109 ==================================================================================== 00:11:02.109 Total 345312/s 1348 MiB/s 0 0' 00:11:02.109 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.109 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.109 13:34:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:02.109 13:34:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:02.109 13:34:41 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.109 13:34:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.109 13:34:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.109 13:34:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.109 13:34:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.109 13:34:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.109 13:34:41 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.109 13:34:41 -- accel/accel.sh@42 -- # jq -r . 00:11:02.109 [2024-07-10 13:34:41.230916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
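The xor workload combines its source buffers into the destination with bytewise exclusive-or; the run above used the default two sources ("Source buffers: 2"), and the next test passes -x 3 to add a third. The operation itself is plain bitwise XOR, e.g. in shell arithmetic:

    a=0xF0; b=0x3C; c=0x0F
    printf '0x%02X\n' $(( a ^ b ^ c ))   # 0xC3, the three-source combination exercised by -x 3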
00:11:02.109 [2024-07-10 13:34:41.231160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109503 ] 00:11:02.109 [2024-07-10 13:34:41.389691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.368 [2024-07-10 13:34:41.642830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.637 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=0x1 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=xor 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=2 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=software 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=32 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=32 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=1 00:11:02.638 13:34:41 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val=Yes 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:02.638 13:34:41 -- accel/accel.sh@21 -- # val= 00:11:02.638 13:34:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # IFS=: 00:11:02.638 13:34:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 13:34:43 -- accel/accel.sh@21 -- # val= 00:11:05.174 13:34:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # IFS=: 00:11:05.174 13:34:43 -- accel/accel.sh@20 -- # read -r var val 00:11:05.174 ************************************ 00:11:05.174 END TEST accel_xor 00:11:05.174 ************************************ 00:11:05.174 13:34:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.174 13:34:43 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:05.174 13:34:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.174 00:11:05.174 real 0m5.503s 00:11:05.174 user 0m4.969s 00:11:05.174 sys 0m0.368s 00:11:05.174 13:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.174 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 13:34:44 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:05.174 13:34:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:05.174 13:34:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.174 13:34:44 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 ************************************ 00:11:05.174 START TEST accel_xor 00:11:05.174 ************************************ 00:11:05.174 
13:34:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:05.174 13:34:44 -- accel/accel.sh@16 -- # local accel_opc 00:11:05.174 13:34:44 -- accel/accel.sh@17 -- # local accel_module 00:11:05.174 13:34:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:05.174 13:34:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:05.174 13:34:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.174 13:34:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.174 13:34:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.174 13:34:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.174 13:34:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.174 13:34:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.174 13:34:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.174 13:34:44 -- accel/accel.sh@42 -- # jq -r . 00:11:05.174 [2024-07-10 13:34:44.081127] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:05.174 [2024-07-10 13:34:44.081367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109557 ] 00:11:05.174 [2024-07-10 13:34:44.239857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.174 [2024-07-10 13:34:44.488873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.734 13:34:46 -- accel/accel.sh@18 -- # out=' 00:11:07.734 SPDK Configuration: 00:11:07.734 Core mask: 0x1 00:11:07.734 00:11:07.734 Accel Perf Configuration: 00:11:07.734 Workload Type: xor 00:11:07.734 Source buffers: 3 00:11:07.734 Transfer size: 4096 bytes 00:11:07.734 Vector count 1 00:11:07.734 Module: software 00:11:07.734 Queue depth: 32 00:11:07.734 Allocate depth: 32 00:11:07.734 # threads/core: 1 00:11:07.734 Run time: 1 seconds 00:11:07.734 Verify: Yes 00:11:07.734 00:11:07.734 Running for 1 seconds... 00:11:07.734 00:11:07.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.734 ------------------------------------------------------------------------------------ 00:11:07.734 0,0 289152/s 1129 MiB/s 0 0 00:11:07.734 ==================================================================================== 00:11:07.734 Total 289152/s 1129 MiB/s 0 0' 00:11:07.734 13:34:46 -- accel/accel.sh@20 -- # IFS=: 00:11:07.734 13:34:46 -- accel/accel.sh@20 -- # read -r var val 00:11:07.734 13:34:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:07.734 13:34:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:07.734 13:34:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.734 13:34:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.734 13:34:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.734 13:34:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.734 13:34:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.734 13:34:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.734 13:34:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.734 13:34:46 -- accel/accel.sh@42 -- # jq -r . 00:11:07.734 [2024-07-10 13:34:46.853032] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
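Comparing the two xor tables: 345312 ops/s with two source buffers against 289152 ops/s with three, so on this host the extra source costs the software path roughly 16 percent:

    echo $(( (345312 - 289152) * 100 / 345312 ))   # 16 (% drop from the 2-source run)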
00:11:07.734 [2024-07-10 13:34:46.853304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109618 ] 00:11:07.734 [2024-07-10 13:34:47.009775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.993 [2024-07-10 13:34:47.274347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.251 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.251 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.251 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.251 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.251 13:34:47 -- accel/accel.sh@21 -- # val=0x1 00:11:08.251 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.251 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=xor 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=3 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=software 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=32 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=32 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=1 00:11:08.252 13:34:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val=Yes 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:08.252 13:34:47 -- accel/accel.sh@21 -- # val= 00:11:08.252 13:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # IFS=: 00:11:08.252 13:34:47 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 13:34:49 -- accel/accel.sh@21 -- # val= 00:11:10.783 13:34:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # IFS=: 00:11:10.783 13:34:49 -- accel/accel.sh@20 -- # read -r var val 00:11:10.783 ************************************ 00:11:10.783 END TEST accel_xor 00:11:10.783 ************************************ 00:11:10.783 13:34:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.783 13:34:49 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:10.783 13:34:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.783 00:11:10.783 real 0m5.564s 00:11:10.783 user 0m5.066s 00:11:10.783 sys 0m0.331s 00:11:10.783 13:34:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.783 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:11:10.783 13:34:49 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:10.783 13:34:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:10.783 13:34:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.783 13:34:49 -- common/autotest_common.sh@10 -- # set +x 00:11:10.783 ************************************ 00:11:10.783 START TEST accel_dif_verify 00:11:10.783 ************************************ 
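The dif_verify test starting here exercises T10 DIF-style protection information: per the configuration block below, each 512-byte block carries 8 bytes of metadata, and the workload validates that metadata rather than comparing whole payloads, which is presumably why this is the one configuration in the suite reporting "Verify: No". The per-transfer metadata overhead follows directly from the block geometry:

    echo $(( 4096 / 512 ))       # 8 protected blocks per 4096-byte transfer
    echo $(( 8 * 4096 / 512 ))   # 64 bytes of DIF metadata per transfer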
00:11:10.783 13:34:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:10.783 13:34:49 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.783 13:34:49 -- accel/accel.sh@17 -- # local accel_module 00:11:10.783 13:34:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:10.783 13:34:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:10.783 13:34:49 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.783 13:34:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.783 13:34:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.783 13:34:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.783 13:34:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.783 13:34:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.783 13:34:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.783 13:34:49 -- accel/accel.sh@42 -- # jq -r . 00:11:10.783 [2024-07-10 13:34:49.718511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:10.783 [2024-07-10 13:34:49.718772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109672 ] 00:11:10.783 [2024-07-10 13:34:49.896354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.783 [2024-07-10 13:34:50.137252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.334 13:34:52 -- accel/accel.sh@18 -- # out=' 00:11:13.334 SPDK Configuration: 00:11:13.334 Core mask: 0x1 00:11:13.334 00:11:13.334 Accel Perf Configuration: 00:11:13.334 Workload Type: dif_verify 00:11:13.334 Vector size: 4096 bytes 00:11:13.334 Transfer size: 4096 bytes 00:11:13.334 Block size: 512 bytes 00:11:13.334 Metadata size: 8 bytes 00:11:13.334 Vector count 1 00:11:13.334 Module: software 00:11:13.334 Queue depth: 32 00:11:13.334 Allocate depth: 32 00:11:13.334 # threads/core: 1 00:11:13.334 Run time: 1 seconds 00:11:13.334 Verify: No 00:11:13.334 00:11:13.334 Running for 1 seconds... 00:11:13.334 00:11:13.334 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:13.334 ------------------------------------------------------------------------------------ 00:11:13.334 0,0 116544/s 455 MiB/s 0 0 00:11:13.334 ==================================================================================== 00:11:13.334 Total 116544/s 455 MiB/s 0 0' 00:11:13.334 13:34:52 -- accel/accel.sh@20 -- # IFS=: 00:11:13.334 13:34:52 -- accel/accel.sh@20 -- # read -r var val 00:11:13.334 13:34:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:13.334 13:34:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:13.334 13:34:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.334 13:34:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.334 13:34:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.334 13:34:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.334 13:34:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.334 13:34:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.334 13:34:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.334 13:34:52 -- accel/accel.sh@42 -- # jq -r . 00:11:13.334 [2024-07-10 13:34:52.591704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:11:13.335 [2024-07-10 13:34:52.591944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109719 ] 00:11:13.594 [2024-07-10 13:34:52.754030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.852 [2024-07-10 13:34:52.993658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=0x1 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=dif_verify 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=software 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- 
accel/accel.sh@21 -- # val=32 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=32 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=1 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val=No 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:14.110 13:34:53 -- accel/accel.sh@21 -- # val= 00:11:14.110 13:34:53 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # IFS=: 00:11:14.110 13:34:53 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 13:34:55 -- accel/accel.sh@21 -- # val= 00:11:16.027 13:34:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # IFS=: 00:11:16.027 13:34:55 -- accel/accel.sh@20 -- # read -r var val 00:11:16.027 ************************************ 00:11:16.027 END TEST accel_dif_verify 00:11:16.027 ************************************ 00:11:16.027 13:34:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:16.027 13:34:55 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:16.027 13:34:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.027 00:11:16.027 real 0m5.591s 00:11:16.027 user 0m5.141s 00:11:16.027 sys 0m0.299s 00:11:16.027 13:34:55 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:16.027 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.027 13:34:55 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:16.027 13:34:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:16.027 13:34:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.027 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.027 ************************************ 00:11:16.027 START TEST accel_dif_generate 00:11:16.027 ************************************ 00:11:16.027 13:34:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:16.027 13:34:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:16.027 13:34:55 -- accel/accel.sh@17 -- # local accel_module 00:11:16.027 13:34:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:16.027 13:34:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:16.027 13:34:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.027 13:34:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.027 13:34:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.027 13:34:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.027 13:34:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.027 13:34:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.027 13:34:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.027 13:34:55 -- accel/accel.sh@42 -- # jq -r . 00:11:16.027 [2024-07-10 13:34:55.328516] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:16.027 [2024-07-10 13:34:55.330164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109789 ] 00:11:16.285 [2024-07-10 13:34:55.484758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.542 [2024-07-10 13:34:55.753617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.074 13:34:58 -- accel/accel.sh@18 -- # out=' 00:11:19.074 SPDK Configuration: 00:11:19.074 Core mask: 0x1 00:11:19.074 00:11:19.074 Accel Perf Configuration: 00:11:19.074 Workload Type: dif_generate 00:11:19.074 Vector size: 4096 bytes 00:11:19.074 Transfer size: 4096 bytes 00:11:19.074 Block size: 512 bytes 00:11:19.074 Metadata size: 8 bytes 00:11:19.074 Vector count 1 00:11:19.074 Module: software 00:11:19.074 Queue depth: 32 00:11:19.074 Allocate depth: 32 00:11:19.074 # threads/core: 1 00:11:19.074 Run time: 1 seconds 00:11:19.074 Verify: No 00:11:19.074 00:11:19.074 Running for 1 seconds... 
00:11:19.074 00:11:19.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:19.074 ------------------------------------------------------------------------------------ 00:11:19.074 0,0 126624/s 494 MiB/s 0 0 00:11:19.074 ==================================================================================== 00:11:19.074 Total 126624/s 494 MiB/s 0 0' 00:11:19.074 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.074 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.074 13:34:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:19.074 13:34:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.074 13:34:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:19.074 13:34:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.074 13:34:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.074 13:34:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.074 13:34:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.074 13:34:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.074 13:34:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.074 13:34:58 -- accel/accel.sh@42 -- # jq -r . 00:11:19.074 [2024-07-10 13:34:58.117400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:19.074 [2024-07-10 13:34:58.118224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109832 ] 00:11:19.074 [2024-07-10 13:34:58.278686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.333 [2024-07-10 13:34:58.528314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.592 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=0x1 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=dif_generate 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val
00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=software 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@23 -- # accel_module=software 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=32 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=32 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=1 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val=No 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:19.593 13:34:58 -- accel/accel.sh@21 -- # val= 00:11:19.593 13:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # IFS=: 00:11:19.593 13:34:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- 
accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 13:35:00 -- accel/accel.sh@21 -- # val= 00:11:22.131 13:35:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # IFS=: 00:11:22.131 13:35:00 -- accel/accel.sh@20 -- # read -r var val 00:11:22.131 ************************************ 00:11:22.131 END TEST accel_dif_generate 00:11:22.131 ************************************ 00:11:22.131 13:35:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.131 13:35:00 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:22.131 13:35:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.131 00:11:22.131 real 0m5.645s 00:11:22.131 user 0m5.185s 00:11:22.131 sys 0m0.309s 00:11:22.132 13:35:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.132 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 13:35:00 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:22.132 13:35:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:22.132 13:35:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.132 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:22.132 ************************************ 00:11:22.132 START TEST accel_dif_generate_copy 00:11:22.132 ************************************ 00:11:22.132 13:35:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:22.132 13:35:00 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.132 13:35:00 -- accel/accel.sh@17 -- # local accel_module 00:11:22.132 13:35:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:22.132 13:35:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:22.132 13:35:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.132 13:35:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.132 13:35:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.132 13:35:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.132 13:35:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.132 13:35:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.132 13:35:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.132 13:35:00 -- accel/accel.sh@42 -- # jq -r . 00:11:22.132 [2024-07-10 13:35:01.043595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:11:22.132 [2024-07-10 13:35:01.043839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109886 ] 00:11:22.132 [2024-07-10 13:35:01.206974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.132 [2024-07-10 13:35:01.456348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.668 13:35:03 -- accel/accel.sh@18 -- # out=' 00:11:24.668 SPDK Configuration: 00:11:24.668 Core mask: 0x1 00:11:24.668 00:11:24.668 Accel Perf Configuration: 00:11:24.668 Workload Type: dif_generate_copy 00:11:24.668 Vector size: 4096 bytes 00:11:24.668 Transfer size: 4096 bytes 00:11:24.668 Vector count 1 00:11:24.668 Module: software 00:11:24.668 Queue depth: 32 00:11:24.668 Allocate depth: 32 00:11:24.668 # threads/core: 1 00:11:24.668 Run time: 1 seconds 00:11:24.668 Verify: No 00:11:24.668 00:11:24.668 Running for 1 seconds... 00:11:24.668 00:11:24.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.668 ------------------------------------------------------------------------------------ 00:11:24.668 0,0 105536/s 412 MiB/s 0 0 00:11:24.668 ==================================================================================== 00:11:24.668 Total 105536/s 412 MiB/s 0 0' 00:11:24.668 13:35:03 -- accel/accel.sh@20 -- # IFS=: 00:11:24.668 13:35:03 -- accel/accel.sh@20 -- # read -r var val 00:11:24.668 13:35:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:24.668 13:35:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:24.668 13:35:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.668 13:35:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.668 13:35:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.668 13:35:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.668 13:35:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.668 13:35:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.668 13:35:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.668 13:35:03 -- accel/accel.sh@42 -- # jq -r . 00:11:24.668 [2024-07-10 13:35:03.827488] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:11:24.668 [2024-07-10 13:35:03.827723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109927 ] 00:11:24.668 [2024-07-10 13:35:03.985452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.928 [2024-07-10 13:35:04.253696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=0x1 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=software 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@23 -- # accel_module=software 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=32 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=32 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 
-- # val=1 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val=No 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:25.187 13:35:04 -- accel/accel.sh@21 -- # val= 00:11:25.187 13:35:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # IFS=: 00:11:25.187 13:35:04 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 13:35:06 -- accel/accel.sh@21 -- # val= 00:11:27.727 13:35:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # IFS=: 00:11:27.727 13:35:06 -- accel/accel.sh@20 -- # read -r var val 00:11:27.727 ************************************ 00:11:27.727 END TEST accel_dif_generate_copy 00:11:27.727 ************************************ 00:11:27.727 13:35:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:27.727 13:35:06 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:27.727 13:35:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:27.727 00:11:27.727 real 0m5.868s 00:11:27.727 user 0m5.311s 00:11:27.727 sys 0m0.384s 00:11:27.727 13:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.727 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 13:35:06 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:27.727 13:35:06 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.727 13:35:06 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:27.727 13:35:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.727 13:35:06 -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.727 ************************************ 00:11:27.727 START TEST accel_comp 00:11:27.728 ************************************ 00:11:27.728 13:35:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.728 13:35:06 -- accel/accel.sh@16 -- # local accel_opc 00:11:27.728 13:35:06 -- accel/accel.sh@17 -- # local accel_module 00:11:27.728 13:35:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.728 13:35:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.728 13:35:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.728 13:35:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.728 13:35:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.728 13:35:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.728 13:35:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.728 13:35:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.728 13:35:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.728 13:35:06 -- accel/accel.sh@42 -- # jq -r . 00:11:27.728 [2024-07-10 13:35:06.978950] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:27.728 [2024-07-10 13:35:06.979187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110003 ] 00:11:27.987 [2024-07-10 13:35:07.143579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.246 [2024-07-10 13:35:07.475782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.783 13:35:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:30.783 00:11:30.783 SPDK Configuration: 00:11:30.783 Core mask: 0x1 00:11:30.783 00:11:30.783 Accel Perf Configuration: 00:11:30.783 Workload Type: compress 00:11:30.783 Transfer size: 4096 bytes 00:11:30.783 Vector count 1 00:11:30.783 Module: software 00:11:30.783 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.783 Queue depth: 32 00:11:30.783 Allocate depth: 32 00:11:30.783 # threads/core: 1 00:11:30.783 Run time: 1 seconds 00:11:30.783 Verify: No 00:11:30.783 00:11:30.783 Running for 1 seconds... 
00:11:30.783 00:11:30.783 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.783 ------------------------------------------------------------------------------------ 00:11:30.783 0,0 48032/s 187 MiB/s 0 0 00:11:30.783 ==================================================================================== 00:11:30.783 Total 48032/s 187 MiB/s 0 0' 00:11:30.783 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:30.783 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:30.783 13:35:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.783 13:35:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.783 13:35:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.783 13:35:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.783 13:35:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.783 13:35:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.783 13:35:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.783 13:35:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.783 13:35:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.783 13:35:10 -- accel/accel.sh@42 -- # jq -r . 00:11:30.783 [2024-07-10 13:35:10.055848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:30.783 [2024-07-10 13:35:10.056067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110050 ] 00:11:31.043 [2024-07-10 13:35:10.217305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.302 [2024-07-10 13:35:10.485848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=0x1 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=compress 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.562 13:35:10 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=:
00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=software 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=32 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=32 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=1 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val=No 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:31.562 13:35:10 -- accel/accel.sh@21 -- # val= 00:11:31.562 13:35:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # IFS=: 00:11:31.562 13:35:10 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 
00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 13:35:12 -- accel/accel.sh@21 -- # val= 00:11:34.100 13:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 13:35:12 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 ************************************ 00:11:34.100 END TEST accel_comp 00:11:34.100 ************************************ 00:11:34.100 13:35:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:34.100 13:35:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:34.100 13:35:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:34.100 00:11:34.100 real 0m5.957s 00:11:34.100 user 0m5.382s 00:11:34.100 sys 0m0.409s 00:11:34.100 13:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.100 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:34.100 13:35:12 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:34.100 13:35:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:34.100 13:35:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.100 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:34.100 ************************************ 00:11:34.100 START TEST accel_decomp 00:11:34.100 ************************************ 00:11:34.100 13:35:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:34.100 13:35:12 -- accel/accel.sh@16 -- # local accel_opc 00:11:34.100 13:35:12 -- accel/accel.sh@17 -- # local accel_module 00:11:34.100 13:35:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:34.100 13:35:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.100 13:35:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.100 13:35:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.100 13:35:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:34.100 13:35:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.100 13:35:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.100 13:35:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.100 13:35:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.100 13:35:12 -- accel/accel.sh@42 -- # jq -r . 00:11:34.100 [2024-07-10 13:35:12.978449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:34.100 [2024-07-10 13:35:12.979016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110103 ] 00:11:34.100 [2024-07-10 13:35:13.141475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.100 [2024-07-10 13:35:13.402306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.634 13:35:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:36.634 00:11:36.635 SPDK Configuration: 00:11:36.635 Core mask: 0x1 00:11:36.635 00:11:36.635 Accel Perf Configuration: 00:11:36.635 Workload Type: decompress 00:11:36.635 Transfer size: 4096 bytes 00:11:36.635 Vector count 1 00:11:36.635 Module: software 00:11:36.635 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.635 Queue depth: 32 00:11:36.635 Allocate depth: 32 00:11:36.635 # threads/core: 1 00:11:36.635 Run time: 1 seconds 00:11:36.635 Verify: Yes 00:11:36.635 00:11:36.635 Running for 1 seconds... 00:11:36.635 00:11:36.635 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.635 ------------------------------------------------------------------------------------ 00:11:36.635 0,0 54560/s 213 MiB/s 0 0 00:11:36.635 ==================================================================================== 00:11:36.635 Total 54560/s 213 MiB/s 0 0' 00:11:36.635 13:35:15 -- accel/accel.sh@20 -- # IFS=: 00:11:36.635 13:35:15 -- accel/accel.sh@20 -- # read -r var val 00:11:36.635 13:35:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.635 13:35:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.635 13:35:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.635 13:35:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.635 13:35:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.635 13:35:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.635 13:35:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.635 13:35:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.635 13:35:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.635 13:35:15 -- accel/accel.sh@42 -- # jq -r . 00:11:36.635 [2024-07-10 13:35:15.940579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:11:36.635 [2024-07-10 13:35:15.940890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110167 ] 00:11:36.894 [2024-07-10 13:35:16.120502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.152 [2024-07-10 13:35:16.383333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val=0x1 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.410 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.410 13:35:16 -- accel/accel.sh@21 -- # val=decompress 00:11:37.410 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.410 13:35:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val=software 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@23 -- # accel_module=software 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val=32 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- 
accel/accel.sh@21 -- # val=32 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val=1 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val=Yes 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:37.411 13:35:16 -- accel/accel.sh@21 -- # val= 00:11:37.411 13:35:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # IFS=: 00:11:37.411 13:35:16 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 13:35:18 -- accel/accel.sh@21 -- # val= 00:11:39.940 13:35:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # IFS=: 00:11:39.940 13:35:18 -- accel/accel.sh@20 -- # read -r var val 00:11:39.940 ************************************ 00:11:39.940 END TEST accel_decomp 00:11:39.940 ************************************ 00:11:39.940 13:35:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:39.940 13:35:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:39.940 13:35:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.940 00:11:39.940 real 0m5.941s 00:11:39.940 user 0m5.411s 00:11:39.940 sys 0m0.369s 00:11:39.940 13:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.940 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:11:39.940 13:35:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:11:39.940 13:35:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:39.940 13:35:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.940 13:35:18 -- common/autotest_common.sh@10 -- # set +x 00:11:39.940 ************************************ 00:11:39.940 START TEST accel_decmop_full 00:11:39.940 ************************************ 00:11:39.940 13:35:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.940 13:35:18 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.940 13:35:18 -- accel/accel.sh@17 -- # local accel_module 00:11:39.940 13:35:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.940 13:35:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.940 13:35:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.940 13:35:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.940 13:35:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.940 13:35:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.940 13:35:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.940 13:35:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.940 13:35:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.940 13:35:18 -- accel/accel.sh@42 -- # jq -r . 00:11:39.940 [2024-07-10 13:35:18.992441] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:39.940 [2024-07-10 13:35:18.993211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110226 ] 00:11:39.940 [2024-07-10 13:35:19.165179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.199 [2024-07-10 13:35:19.426732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.733 13:35:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:42.733 00:11:42.733 SPDK Configuration: 00:11:42.733 Core mask: 0x1 00:11:42.733 00:11:42.733 Accel Perf Configuration: 00:11:42.733 Workload Type: decompress 00:11:42.733 Transfer size: 111250 bytes 00:11:42.733 Vector count 1 00:11:42.733 Module: software 00:11:42.733 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:42.733 Queue depth: 32 00:11:42.733 Allocate depth: 32 00:11:42.733 # threads/core: 1 00:11:42.733 Run time: 1 seconds 00:11:42.733 Verify: Yes 00:11:42.733 00:11:42.733 Running for 1 seconds... 
00:11:42.733 00:11:42.733 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:42.733 ------------------------------------------------------------------------------------ 00:11:42.733 0,0 3808/s 157 MiB/s 0 0 00:11:42.733 ==================================================================================== 00:11:42.733 Total 3808/s 404 MiB/s 0 0' 00:11:42.733 13:35:21 -- accel/accel.sh@20 -- # IFS=: 00:11:42.733 13:35:21 -- accel/accel.sh@20 -- # read -r var val 00:11:42.733 13:35:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:42.733 13:35:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:42.733 13:35:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:42.733 13:35:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:42.733 13:35:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.733 13:35:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.733 13:35:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:42.733 13:35:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:42.733 13:35:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:42.733 13:35:21 -- accel/accel.sh@42 -- # jq -r . 00:11:42.733 [2024-07-10 13:35:21.905915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:42.733 [2024-07-10 13:35:21.906573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110271 ] 00:11:42.733 [2024-07-10 13:35:22.070804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.006 [2024-07-10 13:35:22.337381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=0x1 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=decompress 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:43.266 13:35:22 -- 
accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=software 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@23 -- # accel_module=software 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=32 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=32 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=1 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val=Yes 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:43.266 13:35:22 -- accel/accel.sh@21 -- # val= 00:11:43.266 13:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # IFS=: 00:11:43.266 13:35:22 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- 
accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 13:35:24 -- accel/accel.sh@21 -- # val= 00:11:45.792 13:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # IFS=: 00:11:45.792 13:35:24 -- accel/accel.sh@20 -- # read -r var val 00:11:45.792 ************************************ 00:11:45.792 END TEST accel_decmop_full 00:11:45.792 ************************************ 00:11:45.792 13:35:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:45.792 13:35:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:45.792 13:35:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:45.792 00:11:45.792 real 0m5.880s 00:11:45.792 user 0m5.370s 00:11:45.792 sys 0m0.355s 00:11:45.792 13:35:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.792 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:45.792 13:35:24 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.792 13:35:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:45.792 13:35:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.792 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:11:45.792 ************************************ 00:11:45.792 START TEST accel_decomp_mcore 00:11:45.792 ************************************ 00:11:45.792 13:35:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.792 13:35:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:45.792 13:35:24 -- accel/accel.sh@17 -- # local accel_module 00:11:45.792 13:35:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.792 13:35:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.792 13:35:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.792 13:35:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.792 13:35:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.792 13:35:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.792 13:35:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.792 13:35:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.792 13:35:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.792 13:35:24 -- accel/accel.sh@42 -- # jq -r . 00:11:45.792 [2024-07-10 13:35:24.911085] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
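The starred `START TEST` / `END TEST` banners and the `real`/`user`/`sys` triple that close each test come from the `run_test` helper in test/common/autotest_common.sh. A simplified sketch of its observable behavior — the real helper also manages xtrace and argument counting, which is where the `'[' 11 -le 1 ']'` checks in the log come from:

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                  # prints the real/user/sys lines seen above
        local rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }

    # Trivial stand-in command; the log runs e.g. accel_test -t 1 -w decompress ...
    run_test_sketch demo sleep 1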
00:11:45.792 [2024-07-10 13:35:24.911360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110326 ] 00:11:45.792 [2024-07-10 13:35:25.086585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.051 [2024-07-10 13:35:25.353025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.051 [2024-07-10 13:35:25.353203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.051 [2024-07-10 13:35:25.353130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.051 [2024-07-10 13:35:25.353212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.582 13:35:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:48.582 00:11:48.582 SPDK Configuration: 00:11:48.582 Core mask: 0xf 00:11:48.582 00:11:48.582 Accel Perf Configuration: 00:11:48.582 Workload Type: decompress 00:11:48.582 Transfer size: 4096 bytes 00:11:48.582 Vector count 1 00:11:48.582 Module: software 00:11:48.582 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.582 Queue depth: 32 00:11:48.582 Allocate depth: 32 00:11:48.582 # threads/core: 1 00:11:48.582 Run time: 1 seconds 00:11:48.582 Verify: Yes 00:11:48.582 00:11:48.582 Running for 1 seconds... 00:11:48.582 00:11:48.582 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:48.582 ------------------------------------------------------------------------------------ 00:11:48.582 0,0 48224/s 88 MiB/s 0 0 00:11:48.582 3,0 46048/s 84 MiB/s 0 0 00:11:48.582 2,0 49024/s 90 MiB/s 0 0 00:11:48.582 1,0 49088/s 90 MiB/s 0 0 00:11:48.582 ==================================================================================== 00:11:48.582 Total 192384/s 751 MiB/s 0 0' 00:11:48.582 13:35:27 -- accel/accel.sh@20 -- # IFS=: 00:11:48.582 13:35:27 -- accel/accel.sh@20 -- # read -r var val 00:11:48.582 13:35:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.582 13:35:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.582 13:35:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.582 13:35:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.582 13:35:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.582 13:35:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.582 13:35:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.582 13:35:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.582 13:35:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.582 13:35:27 -- accel/accel.sh@42 -- # jq -r . 00:11:48.840 [2024-07-10 13:35:27.948265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
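`-m 0xf` is the reactor core mask: bit i selects core i, so 0xf runs reactors on cores 0 through 3, matching the four `Reactor started on core N` notices and the four per-core rows in the table above. Building such a mask for the first N cores is one line of shell:

    cores=4
    mask=$(printf '0x%x' $(( (1 << cores) - 1 )))
    echo "$mask"    # -> 0xf; passed to accel_perf as: -m 0xf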
00:11:48.840 [2024-07-10 13:35:27.948515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110395 ] 00:11:48.840 [2024-07-10 13:35:28.123107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.097 [2024-07-10 13:35:28.398871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.097 [2024-07-10 13:35:28.398914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.097 [2024-07-10 13:35:28.399041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.097 [2024-07-10 13:35:28.399049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=0xf 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=decompress 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=software 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@23 -- # accel_module=software 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 
00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=32 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=32 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=1 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val=Yes 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:49.356 13:35:28 -- accel/accel.sh@21 -- # val= 00:11:49.356 13:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # IFS=: 00:11:49.356 13:35:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- 
accel/accel.sh@20 -- # read -r var val 00:11:51.886 13:35:30 -- accel/accel.sh@21 -- # val= 00:11:51.886 13:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # IFS=: 00:11:51.886 13:35:30 -- accel/accel.sh@20 -- # read -r var val 00:11:51.886 ************************************ 00:11:51.886 END TEST accel_decomp_mcore 00:11:51.886 ************************************ 00:11:51.886 13:35:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:51.886 13:35:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:51.886 13:35:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:51.886 00:11:51.886 real 0m5.978s 00:11:51.886 user 0m17.281s 00:11:51.886 sys 0m0.421s 00:11:51.886 13:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.886 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.886 13:35:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:51.886 13:35:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:51.886 13:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:51.886 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.886 ************************************ 00:11:51.886 START TEST accel_decomp_full_mcore 00:11:51.886 ************************************ 00:11:51.886 13:35:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:51.886 13:35:30 -- accel/accel.sh@16 -- # local accel_opc 00:11:51.886 13:35:30 -- accel/accel.sh@17 -- # local accel_module 00:11:51.886 13:35:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:51.886 13:35:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:51.886 13:35:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:51.886 13:35:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:51.886 13:35:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.886 13:35:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.886 13:35:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:51.886 13:35:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:51.886 13:35:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:51.886 13:35:30 -- accel/accel.sh@42 -- # jq -r . 00:11:51.886 [2024-07-10 13:35:30.943054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:51.886 [2024-07-10 13:35:30.943248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110457 ] 00:11:51.886 [2024-07-10 13:35:31.114329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.144 [2024-07-10 13:35:31.366945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.144 [2024-07-10 13:35:31.367051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.144 [2024-07-10 13:35:31.367241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.144 [2024-07-10 13:35:31.367249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.675 13:35:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:54.675 00:11:54.675 SPDK Configuration: 00:11:54.675 Core mask: 0xf 00:11:54.675 00:11:54.675 Accel Perf Configuration: 00:11:54.675 Workload Type: decompress 00:11:54.675 Transfer size: 111250 bytes 00:11:54.675 Vector count 1 00:11:54.675 Module: software 00:11:54.675 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.675 Queue depth: 32 00:11:54.675 Allocate depth: 32 00:11:54.675 # threads/core: 1 00:11:54.675 Run time: 1 seconds 00:11:54.675 Verify: Yes 00:11:54.675 00:11:54.675 Running for 1 seconds... 00:11:54.675 00:11:54.675 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:54.675 ------------------------------------------------------------------------------------ 00:11:54.675 0,0 4128/s 170 MiB/s 0 0 00:11:54.675 3,0 4160/s 171 MiB/s 0 0 00:11:54.675 2,0 4064/s 167 MiB/s 0 0 00:11:54.675 1,0 4096/s 169 MiB/s 0 0 00:11:54.675 ==================================================================================== 00:11:54.675 Total 16448/s 1745 MiB/s 0 0' 00:11:54.675 13:35:33 -- accel/accel.sh@20 -- # IFS=: 00:11:54.675 13:35:33 -- accel/accel.sh@20 -- # read -r var val 00:11:54.675 13:35:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:54.675 13:35:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:54.675 13:35:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:54.675 13:35:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:54.675 13:35:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.675 13:35:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.675 13:35:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:54.675 13:35:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:54.675 13:35:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:54.675 13:35:33 -- accel/accel.sh@42 -- # jq -r . 00:11:54.675 [2024-07-10 13:35:33.906306] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
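The `Total` rows in these tables are consistent with bandwidth = transfers/s × transfer size / 2^20; a quick shell check against the two mcore totals above (4096-byte and 111250-byte runs):

    # MiB/s = transfers_per_sec * bytes_per_transfer / 2^20
    for spec in "192384 4096" "16448 111250"; do
        set -- $spec
        echo "$1/s @ $2 B -> $(( $1 * $2 / 1048576 )) MiB/s"
    done
    # -> 751 MiB/s and 1745 MiB/s, matching the logged totals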
00:11:54.675 [2024-07-10 13:35:33.906605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110501 ] 00:11:54.933 [2024-07-10 13:35:34.100288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.192 [2024-07-10 13:35:34.357979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.192 [2024-07-10 13:35:34.358114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.192 [2024-07-10 13:35:34.358267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.192 [2024-07-10 13:35:34.358283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=0xf 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=decompress 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=software 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@23 -- # accel_module=software 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 
00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=32 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=32 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=1 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val=Yes 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:55.451 13:35:34 -- accel/accel.sh@21 -- # val= 00:11:55.451 13:35:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # IFS=: 00:11:55.451 13:35:34 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- 
accel/accel.sh@20 -- # read -r var val 00:11:57.995 13:35:36 -- accel/accel.sh@21 -- # val= 00:11:57.995 13:35:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # IFS=: 00:11:57.995 13:35:36 -- accel/accel.sh@20 -- # read -r var val 00:11:57.995 ************************************ 00:11:57.995 END TEST accel_decomp_full_mcore 00:11:57.995 ************************************ 00:11:57.995 13:35:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.995 13:35:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.995 13:35:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.995 00:11:57.995 real 0m6.017s 00:11:57.995 user 0m17.699s 00:11:57.995 sys 0m0.401s 00:11:57.995 13:35:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.995 13:35:36 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 13:35:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:57.995 13:35:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:57.995 13:35:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.995 13:35:36 -- common/autotest_common.sh@10 -- # set +x 00:11:57.995 ************************************ 00:11:57.995 START TEST accel_decomp_mthread 00:11:57.995 ************************************ 00:11:57.995 13:35:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:57.995 13:35:36 -- accel/accel.sh@16 -- # local accel_opc 00:11:57.995 13:35:36 -- accel/accel.sh@17 -- # local accel_module 00:11:57.995 13:35:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:57.995 13:35:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:57.995 13:35:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.995 13:35:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.995 13:35:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.995 13:35:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.996 13:35:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.996 13:35:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.996 13:35:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.996 13:35:36 -- accel/accel.sh@42 -- # jq -r . 00:11:57.996 [2024-07-10 13:35:37.018187] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:57.996 [2024-07-10 13:35:37.018556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110581 ] 00:11:57.996 [2024-07-10 13:35:37.206047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.254 [2024-07-10 13:35:37.447264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.798 13:35:39 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:00.798 00:12:00.798 SPDK Configuration: 00:12:00.798 Core mask: 0x1 00:12:00.798 00:12:00.798 Accel Perf Configuration: 00:12:00.798 Workload Type: decompress 00:12:00.798 Transfer size: 4096 bytes 00:12:00.798 Vector count 1 00:12:00.798 Module: software 00:12:00.798 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.798 Queue depth: 32 00:12:00.798 Allocate depth: 32 00:12:00.798 # threads/core: 2 00:12:00.798 Run time: 1 seconds 00:12:00.798 Verify: Yes 00:12:00.798 00:12:00.798 Running for 1 seconds... 00:12:00.798 00:12:00.798 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:00.798 ------------------------------------------------------------------------------------ 00:12:00.798 0,1 31776/s 58 MiB/s 0 0 00:12:00.798 0,0 31712/s 58 MiB/s 0 0 00:12:00.798 ==================================================================================== 00:12:00.798 Total 63488/s 248 MiB/s 0 0' 00:12:00.798 13:35:39 -- accel/accel.sh@20 -- # IFS=: 00:12:00.798 13:35:39 -- accel/accel.sh@20 -- # read -r var val 00:12:00.798 13:35:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:00.798 13:35:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:00.798 13:35:39 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.798 13:35:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.798 13:35:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.798 13:35:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.798 13:35:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.798 13:35:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.798 13:35:39 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.798 13:35:39 -- accel/accel.sh@42 -- # jq -r . 00:12:00.798 [2024-07-10 13:35:39.919846] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
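With `-T 2` each selected core runs two worker threads, which is why the `Core,Thread` column above shows rows `0,0` and `0,1` for core 0. Summing the per-thread rates reproduces the `Total` line:

    awk -F'[ ,]+' '{ sum += $3 } END { print sum "/s" }' <<'EOF'
    0,1 31776/s
    0,0 31712/s
    EOF
    # -> 63488/s, matching the Total row above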
00:12:00.798 [2024-07-10 13:35:39.920497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110620 ] 00:12:00.798 [2024-07-10 13:35:40.078053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.056 [2024-07-10 13:35:40.310388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val=0x1 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val=decompress 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val=software 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@23 -- # accel_module=software 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- accel/accel.sh@21 -- # val=32 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.314 13:35:40 -- 
accel/accel.sh@21 -- # val=32 00:12:01.314 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.314 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.315 13:35:40 -- accel/accel.sh@21 -- # val=2 00:12:01.315 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.315 13:35:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:01.315 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.315 13:35:40 -- accel/accel.sh@21 -- # val=Yes 00:12:01.315 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.315 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.315 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:01.315 13:35:40 -- accel/accel.sh@21 -- # val= 00:12:01.315 13:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # IFS=: 00:12:01.315 13:35:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 13:35:42 -- accel/accel.sh@21 -- # val= 00:12:03.848 13:35:42 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # IFS=: 00:12:03.848 13:35:42 -- accel/accel.sh@20 -- # read -r var val 00:12:03.848 ************************************ 00:12:03.848 END TEST accel_decomp_mthread 00:12:03.848 ************************************ 00:12:03.848 13:35:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:03.848 13:35:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:03.848 13:35:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:03.848 00:12:03.848 real 0m5.729s 00:12:03.848 user 0m5.203s 00:12:03.848 sys 0m0.374s 00:12:03.848 13:35:42 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:12:03.848 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:12:03.848 13:35:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.848 13:35:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:03.848 13:35:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.848 13:35:42 -- common/autotest_common.sh@10 -- # set +x 00:12:03.848 ************************************ 00:12:03.848 START TEST accel_deomp_full_mthread 00:12:03.848 ************************************ 00:12:03.848 13:35:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.848 13:35:42 -- accel/accel.sh@16 -- # local accel_opc 00:12:03.848 13:35:42 -- accel/accel.sh@17 -- # local accel_module 00:12:03.848 13:35:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.848 13:35:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.848 13:35:42 -- accel/accel.sh@12 -- # build_accel_config 00:12:03.848 13:35:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:03.848 13:35:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.848 13:35:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.848 13:35:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:03.848 13:35:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:03.848 13:35:42 -- accel/accel.sh@41 -- # local IFS=, 00:12:03.848 13:35:42 -- accel/accel.sh@42 -- # jq -r . 00:12:03.848 [2024-07-10 13:35:42.805448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:03.848 [2024-07-10 13:35:42.805663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110674 ] 00:12:03.848 [2024-07-10 13:35:42.970126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.110 [2024-07-10 13:35:43.223520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.639 13:35:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:06.639 00:12:06.639 SPDK Configuration: 00:12:06.639 Core mask: 0x1 00:12:06.639 00:12:06.639 Accel Perf Configuration: 00:12:06.639 Workload Type: decompress 00:12:06.639 Transfer size: 111250 bytes 00:12:06.639 Vector count 1 00:12:06.639 Module: software 00:12:06.639 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:06.639 Queue depth: 32 00:12:06.639 Allocate depth: 32 00:12:06.639 # threads/core: 2 00:12:06.639 Run time: 1 seconds 00:12:06.639 Verify: Yes 00:12:06.639 00:12:06.639 Running for 1 seconds... 
00:12:06.639 00:12:06.639 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:06.639 ------------------------------------------------------------------------------------ 00:12:06.639 0,1 2048/s 84 MiB/s 0 0 00:12:06.639 0,0 1984/s 81 MiB/s 0 0 00:12:06.639 ==================================================================================== 00:12:06.639 Total 4032/s 427 MiB/s 0 0' 00:12:06.639 13:35:45 -- accel/accel.sh@20 -- # IFS=: 00:12:06.639 13:35:45 -- accel/accel.sh@20 -- # read -r var val 00:12:06.639 13:35:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:06.639 13:35:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:06.639 13:35:45 -- accel/accel.sh@12 -- # build_accel_config 00:12:06.639 13:35:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:06.639 13:35:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:06.639 13:35:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:06.639 13:35:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:06.639 13:35:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:06.639 13:35:45 -- accel/accel.sh@41 -- # local IFS=, 00:12:06.639 13:35:45 -- accel/accel.sh@42 -- # jq -r . 00:12:06.639 [2024-07-10 13:35:45.783623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:06.639 [2024-07-10 13:35:45.783863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110720 ] 00:12:06.639 [2024-07-10 13:35:45.950596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.897 [2024-07-10 13:35:46.223637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=0x1 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=decompress 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=software 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@23 -- # accel_module=software 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=32 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=32 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=2 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val=Yes 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:07.156 13:35:46 -- accel/accel.sh@21 -- # val= 00:12:07.156 13:35:46 -- accel/accel.sh@22 -- # case "$var" in 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # IFS=: 00:12:07.156 13:35:46 -- accel/accel.sh@20 -- # read -r var val 00:12:09.685 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # 
read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 13:35:48 -- accel/accel.sh@21 -- # val= 00:12:09.686 13:35:48 -- accel/accel.sh@22 -- # case "$var" in 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # IFS=: 00:12:09.686 13:35:48 -- accel/accel.sh@20 -- # read -r var val 00:12:09.686 ************************************ 00:12:09.686 END TEST accel_deomp_full_mthread 00:12:09.686 ************************************ 00:12:09.686 13:35:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:09.686 13:35:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:09.686 13:35:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:09.686 00:12:09.686 real 0m6.035s 00:12:09.686 user 0m5.493s 00:12:09.686 sys 0m0.387s 00:12:09.686 13:35:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.686 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:12:09.686 13:35:48 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:09.686 13:35:48 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:09.686 13:35:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:09.686 13:35:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.686 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:12:09.686 13:35:48 -- accel/accel.sh@129 -- # build_accel_config 00:12:09.686 13:35:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:09.686 13:35:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.686 13:35:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.686 13:35:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:09.686 13:35:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:09.686 13:35:48 -- accel/accel.sh@41 -- # local IFS=, 00:12:09.686 13:35:48 -- accel/accel.sh@42 -- # jq -r . 00:12:09.686 ************************************ 00:12:09.686 START TEST accel_dif_functional_tests 00:12:09.686 ************************************ 00:12:09.686 13:35:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:09.686 [2024-07-10 13:35:48.920048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
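Like accel_perf before it, the DIF test binary takes its JSON accel config as `-c /dev/fd/62`: the harness hands the config to the child over an inherited file descriptor instead of a temp file. A minimal sketch of that pattern, using `cat` as a stand-in for the test binary and an illustrative config string:

    config='{"subsystems": []}'     # illustrative JSON, not the harness's real config
    cat /dev/fd/62 62<<< "$config"  # roughly what dif -c /dev/fd/62 sees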
00:12:09.686 [2024-07-10 13:35:48.920297] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110798 ] 00:12:09.946 [2024-07-10 13:35:49.092680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:10.205 [2024-07-10 13:35:49.348444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.205 [2024-07-10 13:35:49.348569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.205 [2024-07-10 13:35:49.348575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.465 00:12:10.465 00:12:10.465 CUnit - A unit testing framework for C - Version 2.1-3 00:12:10.465 http://cunit.sourceforge.net/ 00:12:10.465 00:12:10.465 00:12:10.465 Suite: accel_dif 00:12:10.465 Test: verify: DIF generated, GUARD check ...passed 00:12:10.465 Test: verify: DIF generated, APPTAG check ...passed 00:12:10.465 Test: verify: DIF generated, REFTAG check ...passed 00:12:10.465 Test: verify: DIF not generated, GUARD check ...[2024-07-10 13:35:49.747962] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:10.465 [2024-07-10 13:35:49.748143] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:10.465 passed 00:12:10.465 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 13:35:49.748303] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:10.465 [2024-07-10 13:35:49.748377] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:10.465 passed 00:12:10.465 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 13:35:49.748479] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:10.465 [2024-07-10 13:35:49.748552] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:10.465 passed 00:12:10.465 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:10.465 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-10 13:35:49.748778] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:10.465 passed 00:12:10.465 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:10.465 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:10.465 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:10.465 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 13:35:49.749180] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:10.465 passed 00:12:10.465 Test: generate copy: DIF generated, GUARD check ...passed 00:12:10.465 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:10.465 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:10.465 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:10.465 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:10.465 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:10.465 Test: generate copy: iovecs-len validate ...[2024-07-10 13:35:49.749840] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:10.465 passed 00:12:10.465 Test: generate copy: buffer alignment validate ...passed 00:12:10.465 00:12:10.465 Run Summary: Type Total Ran Passed Failed Inactive 00:12:10.465 suites 1 1 n/a 0 0 00:12:10.465 tests 20 20 20 0 0 00:12:10.465 asserts 204 204 204 0 n/a 00:12:10.465 00:12:10.465 Elapsed time = 0.011 seconds 00:12:12.378 ************************************ 00:12:12.378 END TEST accel_dif_functional_tests 00:12:12.378 ************************************ 00:12:12.378 00:12:12.378 real 0m2.440s 00:12:12.378 user 0m4.957s 00:12:12.378 sys 0m0.231s 00:12:12.378 13:35:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.378 13:35:51 -- common/autotest_common.sh@10 -- # set +x 00:12:12.378 ************************************ 00:12:12.378 END TEST accel 00:12:12.378 ************************************ 00:12:12.378 00:12:12.378 real 2m7.604s 00:12:12.378 user 2m22.234s 00:12:12.378 sys 0m9.831s 00:12:12.378 13:35:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.378 13:35:51 -- common/autotest_common.sh@10 -- # set +x 00:12:12.378 13:35:51 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:12.378 13:35:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:12.378 13:35:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.378 13:35:51 -- common/autotest_common.sh@10 -- # set +x 00:12:12.378 ************************************ 00:12:12.378 START TEST accel_rpc 00:12:12.378 ************************************ 00:12:12.378 13:35:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:12.378 * Looking for test storage... 00:12:12.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:12.378 13:35:51 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:12.378 13:35:51 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=110895 00:12:12.378 13:35:51 -- accel/accel_rpc.sh@15 -- # waitforlisten 110895 00:12:12.378 13:35:51 -- common/autotest_common.sh@819 -- # '[' -z 110895 ']' 00:12:12.378 13:35:51 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:12.378 13:35:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.378 13:35:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:12.378 13:35:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.378 13:35:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:12.378 13:35:51 -- common/autotest_common.sh@10 -- # set +x 00:12:12.378 [2024-07-10 13:35:51.560898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
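The accel_rpc test starting here exercises opcode-to-module assignment against a target held in the --wait-for-rpc state. A condensed sketch of the flow the xtrace below walks through (paths as in this run; the harness's rpc_cmd wraps SPDK's stock scripts/rpc.py):

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" --wait-for-rpc &      # pid 110895 in this run
    spdk_tgt_pid=$!                                  # polled by waitforlisten
    # assign the copy opcode before the framework initializes ...
    "$spdk/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$spdk/scripts/rpc.py" framework_start_init
    # ... then confirm the assignment took effect
    "$spdk/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software
    kill "$spdk_tgt_pid"                             # killprocess in the trace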
00:12:12.378 [2024-07-10 13:35:51.561570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110895 ] 00:12:12.378 [2024-07-10 13:35:51.725022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.676 [2024-07-10 13:35:51.961761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:12.676 [2024-07-10 13:35:51.962159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.245 13:35:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:13.245 13:35:52 -- common/autotest_common.sh@852 -- # return 0 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:13.245 13:35:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:13.245 13:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.245 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:12:13.245 ************************************ 00:12:13.245 START TEST accel_assign_opcode 00:12:13.245 ************************************ 00:12:13.245 13:35:52 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:13.245 13:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.245 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:12:13.245 [2024-07-10 13:35:52.489924] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:13.245 13:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:13.245 13:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.245 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:12:13.245 [2024-07-10 13:35:52.497915] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:13.245 13:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.245 13:35:52 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:13.245 13:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.245 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:12:14.185 13:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.185 13:35:53 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:14.185 13:35:53 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:14.185 13:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.185 13:35:53 -- common/autotest_common.sh@10 -- # set +x 00:12:14.185 13:35:53 -- accel/accel_rpc.sh@42 -- # grep software 00:12:14.185 13:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.185 software 00:12:14.185 ************************************ 00:12:14.185 END TEST accel_assign_opcode 00:12:14.185 ************************************ 00:12:14.185 00:12:14.185 real 0m0.982s 00:12:14.185 user 0m0.052s 00:12:14.185 sys 0m0.009s 00:12:14.185 13:35:53 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.185 13:35:53 -- common/autotest_common.sh@10 -- # set +x 00:12:14.185 13:35:53 -- accel/accel_rpc.sh@55 -- # killprocess 110895 00:12:14.185 13:35:53 -- common/autotest_common.sh@926 -- # '[' -z 110895 ']' 00:12:14.185 13:35:53 -- common/autotest_common.sh@930 -- # kill -0 110895 00:12:14.185 13:35:53 -- common/autotest_common.sh@931 -- # uname 00:12:14.185 13:35:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:14.185 13:35:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110895 00:12:14.185 killing process with pid 110895 00:12:14.185 13:35:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:14.185 13:35:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:14.185 13:35:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110895' 00:12:14.185 13:35:53 -- common/autotest_common.sh@945 -- # kill 110895 00:12:14.185 13:35:53 -- common/autotest_common.sh@950 -- # wait 110895 00:12:17.477 00:12:17.477 real 0m4.756s 00:12:17.477 user 0m4.780s 00:12:17.477 sys 0m0.493s 00:12:17.477 13:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.477 13:35:56 -- common/autotest_common.sh@10 -- # set +x 00:12:17.477 ************************************ 00:12:17.477 END TEST accel_rpc 00:12:17.477 ************************************ 00:12:17.477 13:35:56 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:17.477 13:35:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:17.477 13:35:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.477 13:35:56 -- common/autotest_common.sh@10 -- # set +x 00:12:17.477 ************************************ 00:12:17.477 START TEST app_cmdline 00:12:17.477 ************************************ 00:12:17.477 13:35:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:17.477 * Looking for test storage... 00:12:17.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:17.477 13:35:56 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:17.477 13:35:56 -- app/cmdline.sh@17 -- # spdk_tgt_pid=111027 00:12:17.477 13:35:56 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:17.477 13:35:56 -- app/cmdline.sh@18 -- # waitforlisten 111027 00:12:17.477 13:35:56 -- common/autotest_common.sh@819 -- # '[' -z 111027 ']' 00:12:17.477 13:35:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.477 13:35:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:17.477 13:35:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.477 13:35:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:17.477 13:35:56 -- common/autotest_common.sh@10 -- # set +x 00:12:17.477 [2024-07-10 13:35:56.399584] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
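Here the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should answer. A sketch of the check the test performs below, using the same rpc.py path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" spdk_get_version                      # allowed: returns the version JSON shown below
    "$rpc" rpc_get_methods | jq -r '.[]' | sort  # allowed: exactly the two listed methods
    "$rpc" env_dpdk_get_mem_stats                # not allow-listed: fails with JSON-RPC
                                                 # error -32601 "Method not found", as expected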
00:12:17.477 [2024-07-10 13:35:56.399813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111027 ] 00:12:17.477 [2024-07-10 13:35:56.562879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.478 [2024-07-10 13:35:56.796467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.478 [2024-07-10 13:35:56.796763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.856 13:35:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:18.856 13:35:57 -- common/autotest_common.sh@852 -- # return 0 00:12:18.856 13:35:57 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:18.856 { 00:12:18.856 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:12:18.856 "fields": { 00:12:18.856 "major": 24, 00:12:18.856 "minor": 1, 00:12:18.856 "patch": 1, 00:12:18.856 "suffix": "-pre", 00:12:18.856 "commit": "4b94202c6" 00:12:18.856 } 00:12:18.856 } 00:12:18.856 13:35:58 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:18.856 13:35:58 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:18.856 13:35:58 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:18.856 13:35:58 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:18.856 13:35:58 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:18.856 13:35:58 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:18.856 13:35:58 -- app/cmdline.sh@26 -- # sort 00:12:18.856 13:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.856 13:35:58 -- common/autotest_common.sh@10 -- # set +x 00:12:18.856 13:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.115 13:35:58 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:19.115 13:35:58 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:19.115 13:35:58 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.115 13:35:58 -- common/autotest_common.sh@640 -- # local es=0 00:12:19.115 13:35:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.115 13:35:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.115 13:35:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:19.115 13:35:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.115 13:35:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:19.115 13:35:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.115 13:35:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:19.115 13:35:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.115 13:35:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:19.115 13:35:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.115 request: 00:12:19.115 { 00:12:19.115 "method": "env_dpdk_get_mem_stats", 00:12:19.115 "req_id": 1 00:12:19.115 } 00:12:19.115 Got 
JSON-RPC error response 00:12:19.115 response: 00:12:19.115 { 00:12:19.115 "code": -32601, 00:12:19.115 "message": "Method not found" 00:12:19.115 } 00:12:19.374 13:35:58 -- common/autotest_common.sh@643 -- # es=1 00:12:19.374 13:35:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:19.374 13:35:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:19.374 13:35:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:19.374 13:35:58 -- app/cmdline.sh@1 -- # killprocess 111027 00:12:19.374 13:35:58 -- common/autotest_common.sh@926 -- # '[' -z 111027 ']' 00:12:19.374 13:35:58 -- common/autotest_common.sh@930 -- # kill -0 111027 00:12:19.374 13:35:58 -- common/autotest_common.sh@931 -- # uname 00:12:19.374 13:35:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:19.374 13:35:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111027 00:12:19.374 killing process with pid 111027 00:12:19.375 13:35:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:19.375 13:35:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:19.375 13:35:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111027' 00:12:19.375 13:35:58 -- common/autotest_common.sh@945 -- # kill 111027 00:12:19.375 13:35:58 -- common/autotest_common.sh@950 -- # wait 111027 00:12:21.907 ************************************ 00:12:21.907 END TEST app_cmdline 00:12:21.907 ************************************ 00:12:21.907 00:12:21.907 real 0m4.883s 00:12:21.907 user 0m5.351s 00:12:21.907 sys 0m0.596s 00:12:21.907 13:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.907 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 13:36:01 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:21.907 13:36:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:21.907 13:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.907 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 ************************************ 00:12:21.907 START TEST version 00:12:21.907 ************************************ 00:12:21.907 13:36:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:21.907 * Looking for test storage... 
00:12:21.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:21.907 13:36:01 -- app/version.sh@17 -- # get_header_version major 00:12:22.166 13:36:01 -- app/version.sh@14 -- # cut -f2 00:12:22.166 13:36:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.166 13:36:01 -- app/version.sh@14 -- # tr -d '"' 00:12:22.166 13:36:01 -- app/version.sh@17 -- # major=24 00:12:22.166 13:36:01 -- app/version.sh@18 -- # get_header_version minor 00:12:22.166 13:36:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.166 13:36:01 -- app/version.sh@14 -- # cut -f2 00:12:22.166 13:36:01 -- app/version.sh@14 -- # tr -d '"' 00:12:22.166 13:36:01 -- app/version.sh@18 -- # minor=1 00:12:22.166 13:36:01 -- app/version.sh@19 -- # get_header_version patch 00:12:22.166 13:36:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.166 13:36:01 -- app/version.sh@14 -- # cut -f2 00:12:22.166 13:36:01 -- app/version.sh@14 -- # tr -d '"' 00:12:22.166 13:36:01 -- app/version.sh@19 -- # patch=1 00:12:22.166 13:36:01 -- app/version.sh@20 -- # get_header_version suffix 00:12:22.166 13:36:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.166 13:36:01 -- app/version.sh@14 -- # cut -f2 00:12:22.166 13:36:01 -- app/version.sh@14 -- # tr -d '"' 00:12:22.166 13:36:01 -- app/version.sh@20 -- # suffix=-pre 00:12:22.166 13:36:01 -- app/version.sh@22 -- # version=24.1 00:12:22.166 13:36:01 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:22.166 13:36:01 -- app/version.sh@25 -- # version=24.1.1 00:12:22.166 13:36:01 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:22.166 13:36:01 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:22.166 13:36:01 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:22.166 13:36:01 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:22.166 13:36:01 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:22.166 ************************************ 00:12:22.166 END TEST version 00:12:22.166 ************************************ 00:12:22.166 00:12:22.166 real 0m0.194s 00:12:22.166 user 0m0.116s 00:12:22.166 sys 0m0.127s 00:12:22.166 13:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.166 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:22.166 13:36:01 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:22.166 13:36:01 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.166 13:36:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:22.166 13:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.166 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:22.166 ************************************ 00:12:22.166 START TEST blockdev_general 00:12:22.166 ************************************ 00:12:22.166 13:36:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.166 * Looking for test storage... 
00:12:22.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:22.166 13:36:01 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:22.166 13:36:01 -- bdev/nbd_common.sh@6 -- # set -e 00:12:22.166 13:36:01 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:22.166 13:36:01 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:22.166 13:36:01 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:22.166 13:36:01 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:22.166 13:36:01 -- bdev/blockdev.sh@18 -- # : 00:12:22.166 13:36:01 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:22.166 13:36:01 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:22.166 13:36:01 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:22.166 13:36:01 -- bdev/blockdev.sh@672 -- # uname -s 00:12:22.166 13:36:01 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:22.166 13:36:01 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:22.166 13:36:01 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:22.166 13:36:01 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:22.166 13:36:01 -- bdev/blockdev.sh@682 -- # dek= 00:12:22.166 13:36:01 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:22.166 13:36:01 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:22.166 13:36:01 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:22.166 13:36:01 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:22.166 13:36:01 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:22.166 13:36:01 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:22.166 13:36:01 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=111244 00:12:22.166 13:36:01 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:22.166 13:36:01 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:22.166 13:36:01 -- bdev/blockdev.sh@47 -- # waitforlisten 111244 00:12:22.166 13:36:01 -- common/autotest_common.sh@819 -- # '[' -z 111244 ']' 00:12:22.166 13:36:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.166 13:36:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.166 13:36:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.166 13:36:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.166 13:36:01 -- common/autotest_common.sh@10 -- # set +x 00:12:22.425 [2024-07-10 13:36:01.619067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
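The version test just above derives each component straight from the tree: get_header_version greps include/spdk/version.h, cuts the value field, and strips quotes. A compact sketch of that pipeline as traced:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {   # mirrors the helper traced above
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre
    # the script assembles 24.1.1rc0 from these, then compares it against
    # python3 -c 'import spdk; print(spdk.__version__)'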
00:12:22.425 [2024-07-10 13:36:01.619347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111244 ] 00:12:22.685 [2024-07-10 13:36:01.792953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.685 [2024-07-10 13:36:02.013608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.685 [2024-07-10 13:36:02.013887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.254 13:36:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.254 13:36:02 -- common/autotest_common.sh@852 -- # return 0 00:12:23.254 13:36:02 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:23.254 13:36:02 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:23.254 13:36:02 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:23.254 13:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.254 13:36:02 -- common/autotest_common.sh@10 -- # set +x 00:12:24.191 [2024-07-10 13:36:03.363123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.191 [2024-07-10 13:36:03.363266] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.191 00:12:24.191 [2024-07-10 13:36:03.371089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.191 [2024-07-10 13:36:03.371172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.191 00:12:24.191 Malloc0 00:12:24.191 Malloc1 00:12:24.191 Malloc2 00:12:24.450 Malloc3 00:12:24.450 Malloc4 00:12:24.450 Malloc5 00:12:24.450 Malloc6 00:12:24.450 Malloc7 00:12:24.450 Malloc8 00:12:24.709 Malloc9 00:12:24.709 [2024-07-10 13:36:03.843233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.709 [2024-07-10 13:36:03.843344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.709 [2024-07-10 13:36:03.843386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:24.709 [2024-07-10 13:36:03.843425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.709 [2024-07-10 13:36:03.845548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.709 [2024-07-10 13:36:03.845627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:24.709 TestPT 00:12:24.709 13:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:03 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:24.709 5000+0 records in 00:12:24.709 5000+0 records out 00:12:24.709 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0262137 s, 391 MB/s 00:12:24.709 13:36:03 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:24.709 13:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.709 AIO0 00:12:24.709 13:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:03 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:24.709 13:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:03 -- common/autotest_common.sh@10 -- # set +x 
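Besides the Malloc disks and the TestPT passthru claimed on top of Malloc3, the setup step above fabricates an AIO bdev from a plain file. The dd line and the bdev_aio_create call are reproduced here for reference (rpc_cmd in the trace is assumed to resolve to scripts/rpc.py against the default socket); note the 2048-byte block size, which the bdevio I/O-target list reports again later:

    aiofile=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000    # 10240000 bytes, as logged
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048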
00:12:24.709 13:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:03 -- bdev/blockdev.sh@738 -- # cat 00:12:24.709 13:36:03 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:24.709 13:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.709 13:36:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:03 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:24.709 13:36:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.709 13:36:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:04 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:24.709 13:36:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:12:24.709 13:36:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.709 13:36:04 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:24.709 13:36:04 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:24.709 13:36:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:24.709 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:12:24.709 13:36:04 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:24.969 13:36:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:24.969 13:36:04 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:24.969 13:36:04 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:24.970 13:36:04 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fd9037fc-d2bb-463c-8a02-6c6b92971223"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd9037fc-d2bb-463c-8a02-6c6b92971223",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "a07ededc-9db1-5fab-8f2c-73a837dc8416"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a07ededc-9db1-5fab-8f2c-73a837dc8416",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "392657ad-7ad8-5c49-b4e6-8cc7dea0728b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "392657ad-7ad8-5c49-b4e6-8cc7dea0728b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d611bcef-036d-5bee-99d8-2ed63bcd8c1b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d611bcef-036d-5bee-99d8-2ed63bcd8c1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5876ee49-053c-5fa1-bc77-4e89b0492567"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5876ee49-053c-5fa1-bc77-4e89b0492567",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e4578445-8bb8-53e4-bff1-1da1a602f1b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e4578445-8bb8-53e4-bff1-1da1a602f1b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "de5bd42a-6929-50e1-b892-ef6ec8b98c88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5bd42a-6929-50e1-b892-ef6ec8b98c88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "84581bc4-cbd4-5ca5-b908-f0246ea38ed7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "84581bc4-cbd4-5ca5-b908-f0246ea38ed7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "079fba75-f23e-5243-b9b5-e0936552f324"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "079fba75-f23e-5243-b9b5-e0936552f324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d908f580-9087-5ae6-a6df-c602f7670bf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d908f580-9087-5ae6-a6df-c602f7670bf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "98487151-9deb-5892-8b6a-6dc8f66d042d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "98487151-9deb-5892-8b6a-6dc8f66d042d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "669fafb8-348c-44f5-8f2b-d016bc8f9a78"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6d66f85e-5da2-4be6-8387-5022ab155002",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af4f8ed1-a1da-4bb7-a8cd-534ad6a4b911",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1e77d01b-f84f-4033-8215-eae186d18197"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1999e92a-7fc7-4bd7-89c3-a8e34651f87b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4644dcad-926d-4b3b-be73-b4bcc300748b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f9d82030-8350-47a1-a82c-ff815c604a4e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5e7bed47-0ae6-45ef-9bb7-4ccb1041c6d4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:24.970 13:36:04 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:24.970 13:36:04 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:24.970 13:36:04 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:24.970 13:36:04 -- bdev/blockdev.sh@752 -- # killprocess 111244 00:12:24.970 13:36:04 -- common/autotest_common.sh@926 -- # '[' -z 111244 ']' 00:12:24.970 13:36:04 -- common/autotest_common.sh@930 -- # kill -0 111244 00:12:24.970 13:36:04 -- common/autotest_common.sh@931 -- # uname 00:12:24.970 13:36:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.970 13:36:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111244 00:12:24.970 13:36:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:24.970 13:36:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:24.970 13:36:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111244' 00:12:24.970 killing process with pid 111244 00:12:24.970 13:36:04 -- common/autotest_common.sh@945 -- # kill 111244 00:12:24.970 13:36:04 -- common/autotest_common.sh@950 -- # wait 111244 00:12:29.156 13:36:07 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:29.156 13:36:07 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:29.156 13:36:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:12:29.156 13:36:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.156 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:12:29.156 ************************************ 00:12:29.156 START TEST bdev_hello_world 00:12:29.156 ************************************ 00:12:29.156 13:36:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:29.156 [2024-07-10 13:36:07.998365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:29.156 [2024-07-10 13:36:07.998601] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111362 ] 00:12:29.156 [2024-07-10 13:36:08.162071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.156 [2024-07-10 13:36:08.421912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.728 [2024-07-10 13:36:08.944356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.728 [2024-07-10 13:36:08.944522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.728 [2024-07-10 13:36:08.952358] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.728 [2024-07-10 13:36:08.952530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.728 [2024-07-10 13:36:08.960320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.728 [2024-07-10 13:36:08.960452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:29.728 [2024-07-10 13:36:08.960506] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:30.006 [2024-07-10 13:36:09.215989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:30.006 [2024-07-10 13:36:09.216224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.006 [2024-07-10 13:36:09.216293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:30.006 [2024-07-10 13:36:09.216356] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.006 [2024-07-10 13:36:09.218653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.006 [2024-07-10 13:36:09.218759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:30.264 [2024-07-10 13:36:09.609869] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:30.264 [2024-07-10 13:36:09.610137] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:30.264 [2024-07-10 13:36:09.610346] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:30.264 [2024-07-10 13:36:09.610544] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:30.264 [2024-07-10 13:36:09.610780] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:30.264 [2024-07-10 13:36:09.610905] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:30.264 [2024-07-10 13:36:09.611071] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
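The hello_world stage drives one write/read round-trip through the bdev layer; the notices above trace the sequence. The invocation, exactly as run_test issued it:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Malloc0
    # notice sequence on success: open Malloc0 -> open io channel -> write
    # "Hello World!" -> read it back -> 'Read string from bdev : Hello World!'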
00:12:30.264 00:12:30.264 [2024-07-10 13:36:09.611208] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:33.560 ************************************ 00:12:33.560 END TEST bdev_hello_world 00:12:33.560 ************************************ 00:12:33.560 00:12:33.560 real 0m4.546s 00:12:33.560 user 0m4.082s 00:12:33.560 sys 0m0.312s 00:12:33.560 13:36:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.560 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:33.560 13:36:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:33.560 13:36:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:33.560 13:36:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.560 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:33.560 ************************************ 00:12:33.560 START TEST bdev_bounds 00:12:33.560 ************************************ 00:12:33.560 13:36:12 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:33.560 13:36:12 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:33.560 13:36:12 -- bdev/blockdev.sh@288 -- # bdevio_pid=111436 00:12:33.560 Process bdevio pid: 111436 00:12:33.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.560 13:36:12 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:33.560 13:36:12 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 111436' 00:12:33.560 13:36:12 -- bdev/blockdev.sh@291 -- # waitforlisten 111436 00:12:33.560 13:36:12 -- common/autotest_common.sh@819 -- # '[' -z 111436 ']' 00:12:33.560 13:36:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.560 13:36:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:33.560 13:36:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.560 13:36:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:33.560 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:12:33.560 [2024-07-10 13:36:12.586701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:33.560 [2024-07-10 13:36:12.587044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111436 ] 00:12:33.560 [2024-07-10 13:36:12.755341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.823 [2024-07-10 13:36:13.013528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.823 [2024-07-10 13:36:13.013599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.823 [2024-07-10 13:36:13.013598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.387 [2024-07-10 13:36:13.531619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:34.387 [2024-07-10 13:36:13.531817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:34.387 [2024-07-10 13:36:13.539578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:34.387 [2024-07-10 13:36:13.539715] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:34.387 [2024-07-10 13:36:13.547615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:34.387 [2024-07-10 13:36:13.547767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:34.387 [2024-07-10 13:36:13.547885] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:34.645 [2024-07-10 13:36:13.808503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:34.645 [2024-07-10 13:36:13.808720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.645 [2024-07-10 13:36:13.808817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:34.645 [2024-07-10 13:36:13.808861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.645 [2024-07-10 13:36:13.811273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.645 [2024-07-10 13:36:13.811362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:36.022 13:36:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:36.022 13:36:14 -- common/autotest_common.sh@852 -- # return 0 00:12:36.022 13:36:14 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:36.022 I/O targets: 00:12:36.022 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:36.022 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:36.022 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:36.022 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:36.022 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:36.022 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:36.022 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:36.022 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:36.022 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
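With the bdev stack re-registered inside the bdevio app (pid 111436, three reactors), the CUnit suites below are triggered by a helper script rather than run at startup. A sketch of the pairing as the xtrace shows it, assuming -w means "wait for the RPC trigger" and -s 0 carries the PRE_RESERVED_MEM=0 set during blockdev setup:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
    # tests.py connects over the default RPC socket and fires the suites
    "$spdk/test/bdev/bdevio/tests.py" perform_tests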
00:12:36.022 00:12:36.022 00:12:36.022 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.022 http://cunit.sourceforge.net/ 00:12:36.022 00:12:36.022 00:12:36.022 Suite: bdevio tests on: AIO0 00:12:36.022 Test: blockdev write read block ...passed 00:12:36.022 Test: blockdev write zeroes read block ...passed 00:12:36.022 Test: blockdev write zeroes read no split ...passed 00:12:36.022 Test: blockdev write zeroes read split ...passed 00:12:36.022 Test: blockdev write zeroes read split partial ...passed 00:12:36.022 Test: blockdev reset ...passed 00:12:36.022 Test: blockdev write read 8 blocks ...passed 00:12:36.022 Test: blockdev write read size > 128k ...passed 00:12:36.022 Test: blockdev write read invalid size ...passed 00:12:36.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.022 Test: blockdev write read max offset ...passed 00:12:36.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.022 Test: blockdev writev readv 8 blocks ...passed 00:12:36.022 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.022 Test: blockdev writev readv block ...passed 00:12:36.022 Test: blockdev writev readv size > 128k ...passed 00:12:36.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.022 Test: blockdev comparev and writev ...passed 00:12:36.022 Test: blockdev nvme passthru rw ...passed 00:12:36.022 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.022 Test: blockdev nvme admin passthru ...passed 00:12:36.022 Test: blockdev copy ...passed 00:12:36.022 Suite: bdevio tests on: raid1 00:12:36.022 Test: blockdev write read block ...passed 00:12:36.022 Test: blockdev write zeroes read block ...passed 00:12:36.022 Test: blockdev write zeroes read no split ...passed 00:12:36.022 Test: blockdev write zeroes read split ...passed 00:12:36.022 Test: blockdev write zeroes read split partial ...passed 00:12:36.022 Test: blockdev reset ...passed 00:12:36.022 Test: blockdev write read 8 blocks ...passed 00:12:36.022 Test: blockdev write read size > 128k ...passed 00:12:36.022 Test: blockdev write read invalid size ...passed 00:12:36.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.022 Test: blockdev write read max offset ...passed 00:12:36.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.022 Test: blockdev writev readv 8 blocks ...passed 00:12:36.022 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.022 Test: blockdev writev readv block ...passed 00:12:36.022 Test: blockdev writev readv size > 128k ...passed 00:12:36.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.022 Test: blockdev comparev and writev ...passed 00:12:36.022 Test: blockdev nvme passthru rw ...passed 00:12:36.022 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.022 Test: blockdev nvme admin passthru ...passed 00:12:36.022 Test: blockdev copy ...passed 00:12:36.022 Suite: bdevio tests on: concat0 00:12:36.022 Test: blockdev write read block ...passed 00:12:36.022 Test: blockdev write zeroes read block ...passed 00:12:36.022 Test: blockdev write zeroes read no split ...passed 00:12:36.022 Test: blockdev write zeroes read split ...passed 00:12:36.022 Test: blockdev write zeroes read split partial ...passed 00:12:36.022 Test: blockdev reset 
...passed 00:12:36.022 Test: blockdev write read 8 blocks ...passed 00:12:36.022 Test: blockdev write read size > 128k ...passed 00:12:36.022 Test: blockdev write read invalid size ...passed 00:12:36.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.022 Test: blockdev write read max offset ...passed 00:12:36.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.022 Test: blockdev writev readv 8 blocks ...passed 00:12:36.022 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.022 Test: blockdev writev readv block ...passed 00:12:36.022 Test: blockdev writev readv size > 128k ...passed 00:12:36.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.022 Test: blockdev comparev and writev ...passed 00:12:36.022 Test: blockdev nvme passthru rw ...passed 00:12:36.022 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.022 Test: blockdev nvme admin passthru ...passed 00:12:36.022 Test: blockdev copy ...passed 00:12:36.022 Suite: bdevio tests on: raid0 00:12:36.022 Test: blockdev write read block ...passed 00:12:36.022 Test: blockdev write zeroes read block ...passed 00:12:36.022 Test: blockdev write zeroes read no split ...passed 00:12:36.281 Test: blockdev write zeroes read split ...passed 00:12:36.281 Test: blockdev write zeroes read split partial ...passed 00:12:36.281 Test: blockdev reset ...passed 00:12:36.281 Test: blockdev write read 8 blocks ...passed 00:12:36.281 Test: blockdev write read size > 128k ...passed 00:12:36.281 Test: blockdev write read invalid size ...passed 00:12:36.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.281 Test: blockdev write read max offset ...passed 00:12:36.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.281 Test: blockdev writev readv 8 blocks ...passed 00:12:36.281 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.281 Test: blockdev writev readv block ...passed 00:12:36.281 Test: blockdev writev readv size > 128k ...passed 00:12:36.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.281 Test: blockdev comparev and writev ...passed 00:12:36.281 Test: blockdev nvme passthru rw ...passed 00:12:36.281 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.281 Test: blockdev nvme admin passthru ...passed 00:12:36.281 Test: blockdev copy ...passed 00:12:36.281 Suite: bdevio tests on: TestPT 00:12:36.281 Test: blockdev write read block ...passed 00:12:36.281 Test: blockdev write zeroes read block ...passed 00:12:36.281 Test: blockdev write zeroes read no split ...passed 00:12:36.281 Test: blockdev write zeroes read split ...passed 00:12:36.281 Test: blockdev write zeroes read split partial ...passed 00:12:36.281 Test: blockdev reset ...passed 00:12:36.281 Test: blockdev write read 8 blocks ...passed 00:12:36.281 Test: blockdev write read size > 128k ...passed 00:12:36.281 Test: blockdev write read invalid size ...passed 00:12:36.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.281 Test: blockdev write read max offset ...passed 00:12:36.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.281 Test: blockdev writev readv 8 blocks 
...passed 00:12:36.282 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.282 Test: blockdev writev readv block ...passed 00:12:36.282 Test: blockdev writev readv size > 128k ...passed 00:12:36.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.282 Test: blockdev comparev and writev ...passed 00:12:36.282 Test: blockdev nvme passthru rw ...passed 00:12:36.282 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.282 Test: blockdev nvme admin passthru ...passed 00:12:36.282 Test: blockdev copy ...passed 00:12:36.282 Suite: bdevio tests on: Malloc2p7 00:12:36.282 Test: blockdev write read block ...passed 00:12:36.282 Test: blockdev write zeroes read block ...passed 00:12:36.282 Test: blockdev write zeroes read no split ...passed 00:12:36.282 Test: blockdev write zeroes read split ...passed 00:12:36.282 Test: blockdev write zeroes read split partial ...passed 00:12:36.282 Test: blockdev reset ...passed 00:12:36.282 Test: blockdev write read 8 blocks ...passed 00:12:36.282 Test: blockdev write read size > 128k ...passed 00:12:36.282 Test: blockdev write read invalid size ...passed 00:12:36.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.282 Test: blockdev write read max offset ...passed 00:12:36.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.282 Test: blockdev writev readv 8 blocks ...passed 00:12:36.282 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.282 Test: blockdev writev readv block ...passed 00:12:36.282 Test: blockdev writev readv size > 128k ...passed 00:12:36.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.282 Test: blockdev comparev and writev ...passed 00:12:36.282 Test: blockdev nvme passthru rw ...passed 00:12:36.282 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.282 Test: blockdev nvme admin passthru ...passed 00:12:36.282 Test: blockdev copy ...passed 00:12:36.282 Suite: bdevio tests on: Malloc2p6 00:12:36.282 Test: blockdev write read block ...passed 00:12:36.282 Test: blockdev write zeroes read block ...passed 00:12:36.282 Test: blockdev write zeroes read no split ...passed 00:12:36.282 Test: blockdev write zeroes read split ...passed 00:12:36.540 Test: blockdev write zeroes read split partial ...passed 00:12:36.540 Test: blockdev reset ...passed 00:12:36.540 Test: blockdev write read 8 blocks ...passed 00:12:36.540 Test: blockdev write read size > 128k ...passed 00:12:36.540 Test: blockdev write read invalid size ...passed 00:12:36.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.540 Test: blockdev write read max offset ...passed 00:12:36.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.540 Test: blockdev writev readv 8 blocks ...passed 00:12:36.540 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.540 Test: blockdev writev readv block ...passed 00:12:36.540 Test: blockdev writev readv size > 128k ...passed 00:12:36.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.540 Test: blockdev comparev and writev ...passed 00:12:36.540 Test: blockdev nvme passthru rw ...passed 00:12:36.540 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.540 Test: blockdev nvme admin passthru ...passed 00:12:36.540 Test: blockdev copy ...passed 
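The suites that follow (Malloc2p5 through Malloc0) repeat the same 23-test sequence already shown for AIO0 through Malloc2p6, one suite per I/O target. A quick cross-check against the run summary reported further below (shell arithmetic, not from the log; the per-suite count of 23 is tallied from the test names above):

  echo $(( 16 * 23 ))   # 368, matching 'tests 368 368 368 0 0' in the run summary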
00:12:36.540 Suite: bdevio tests on: Malloc2p5 00:12:36.540 Test: blockdev write read block ...passed 00:12:36.540 Test: blockdev write zeroes read block ...passed 00:12:36.540 Test: blockdev write zeroes read no split ...passed 00:12:36.540 Test: blockdev write zeroes read split ...passed 00:12:36.540 Test: blockdev write zeroes read split partial ...passed 00:12:36.540 Test: blockdev reset ...passed 00:12:36.540 Test: blockdev write read 8 blocks ...passed 00:12:36.540 Test: blockdev write read size > 128k ...passed 00:12:36.540 Test: blockdev write read invalid size ...passed 00:12:36.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.540 Test: blockdev write read max offset ...passed 00:12:36.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.540 Test: blockdev writev readv 8 blocks ...passed 00:12:36.540 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.540 Test: blockdev writev readv block ...passed 00:12:36.540 Test: blockdev writev readv size > 128k ...passed 00:12:36.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.540 Test: blockdev comparev and writev ...passed 00:12:36.540 Test: blockdev nvme passthru rw ...passed 00:12:36.540 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.540 Test: blockdev nvme admin passthru ...passed 00:12:36.540 Test: blockdev copy ...passed 00:12:36.540 Suite: bdevio tests on: Malloc2p4 00:12:36.540 Test: blockdev write read block ...passed 00:12:36.540 Test: blockdev write zeroes read block ...passed 00:12:36.540 Test: blockdev write zeroes read no split ...passed 00:12:36.540 Test: blockdev write zeroes read split ...passed 00:12:36.540 Test: blockdev write zeroes read split partial ...passed 00:12:36.540 Test: blockdev reset ...passed 00:12:36.540 Test: blockdev write read 8 blocks ...passed 00:12:36.540 Test: blockdev write read size > 128k ...passed 00:12:36.540 Test: blockdev write read invalid size ...passed 00:12:36.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.540 Test: blockdev write read max offset ...passed 00:12:36.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.540 Test: blockdev writev readv 8 blocks ...passed 00:12:36.540 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.540 Test: blockdev writev readv block ...passed 00:12:36.540 Test: blockdev writev readv size > 128k ...passed 00:12:36.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.540 Test: blockdev comparev and writev ...passed 00:12:36.540 Test: blockdev nvme passthru rw ...passed 00:12:36.540 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.540 Test: blockdev nvme admin passthru ...passed 00:12:36.540 Test: blockdev copy ...passed 00:12:36.540 Suite: bdevio tests on: Malloc2p3 00:12:36.540 Test: blockdev write read block ...passed 00:12:36.540 Test: blockdev write zeroes read block ...passed 00:12:36.540 Test: blockdev write zeroes read no split ...passed 00:12:36.540 Test: blockdev write zeroes read split ...passed 00:12:36.798 Test: blockdev write zeroes read split partial ...passed 00:12:36.798 Test: blockdev reset ...passed 00:12:36.798 Test: blockdev write read 8 blocks ...passed 00:12:36.798 Test: blockdev write read size > 128k ...passed 00:12:36.798 Test: 
blockdev write read invalid size ...passed 00:12:36.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.798 Test: blockdev write read max offset ...passed 00:12:36.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.798 Test: blockdev writev readv 8 blocks ...passed 00:12:36.798 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.798 Test: blockdev writev readv block ...passed 00:12:36.798 Test: blockdev writev readv size > 128k ...passed 00:12:36.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.798 Test: blockdev comparev and writev ...passed 00:12:36.798 Test: blockdev nvme passthru rw ...passed 00:12:36.798 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.798 Test: blockdev nvme admin passthru ...passed 00:12:36.798 Test: blockdev copy ...passed 00:12:36.798 Suite: bdevio tests on: Malloc2p2 00:12:36.798 Test: blockdev write read block ...passed 00:12:36.798 Test: blockdev write zeroes read block ...passed 00:12:36.798 Test: blockdev write zeroes read no split ...passed 00:12:36.798 Test: blockdev write zeroes read split ...passed 00:12:36.798 Test: blockdev write zeroes read split partial ...passed 00:12:36.798 Test: blockdev reset ...passed 00:12:36.798 Test: blockdev write read 8 blocks ...passed 00:12:36.798 Test: blockdev write read size > 128k ...passed 00:12:36.798 Test: blockdev write read invalid size ...passed 00:12:36.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.799 Test: blockdev write read max offset ...passed 00:12:36.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.799 Test: blockdev writev readv 8 blocks ...passed 00:12:36.799 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.799 Test: blockdev writev readv block ...passed 00:12:36.799 Test: blockdev writev readv size > 128k ...passed 00:12:36.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.799 Test: blockdev comparev and writev ...passed 00:12:36.799 Test: blockdev nvme passthru rw ...passed 00:12:36.799 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.799 Test: blockdev nvme admin passthru ...passed 00:12:36.799 Test: blockdev copy ...passed 00:12:36.799 Suite: bdevio tests on: Malloc2p1 00:12:36.799 Test: blockdev write read block ...passed 00:12:36.799 Test: blockdev write zeroes read block ...passed 00:12:36.799 Test: blockdev write zeroes read no split ...passed 00:12:36.799 Test: blockdev write zeroes read split ...passed 00:12:36.799 Test: blockdev write zeroes read split partial ...passed 00:12:36.799 Test: blockdev reset ...passed 00:12:36.799 Test: blockdev write read 8 blocks ...passed 00:12:36.799 Test: blockdev write read size > 128k ...passed 00:12:36.799 Test: blockdev write read invalid size ...passed 00:12:36.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.799 Test: blockdev write read max offset ...passed 00:12:36.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.799 Test: blockdev writev readv 8 blocks ...passed 00:12:36.799 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.799 Test: blockdev writev readv block ...passed 
00:12:36.799 Test: blockdev writev readv size > 128k ...passed 00:12:36.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.799 Test: blockdev comparev and writev ...passed 00:12:36.799 Test: blockdev nvme passthru rw ...passed 00:12:36.799 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.799 Test: blockdev nvme admin passthru ...passed 00:12:36.799 Test: blockdev copy ...passed 00:12:36.799 Suite: bdevio tests on: Malloc2p0 00:12:36.799 Test: blockdev write read block ...passed 00:12:36.799 Test: blockdev write zeroes read block ...passed 00:12:36.799 Test: blockdev write zeroes read no split ...passed 00:12:37.057 Test: blockdev write zeroes read split ...passed 00:12:37.057 Test: blockdev write zeroes read split partial ...passed 00:12:37.057 Test: blockdev reset ...passed 00:12:37.057 Test: blockdev write read 8 blocks ...passed 00:12:37.057 Test: blockdev write read size > 128k ...passed 00:12:37.057 Test: blockdev write read invalid size ...passed 00:12:37.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.057 Test: blockdev write read max offset ...passed 00:12:37.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.057 Test: blockdev writev readv 8 blocks ...passed 00:12:37.057 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.057 Test: blockdev writev readv block ...passed 00:12:37.057 Test: blockdev writev readv size > 128k ...passed 00:12:37.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.057 Test: blockdev comparev and writev ...passed 00:12:37.057 Test: blockdev nvme passthru rw ...passed 00:12:37.057 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.057 Test: blockdev nvme admin passthru ...passed 00:12:37.057 Test: blockdev copy ...passed 00:12:37.057 Suite: bdevio tests on: Malloc1p1 00:12:37.057 Test: blockdev write read block ...passed 00:12:37.057 Test: blockdev write zeroes read block ...passed 00:12:37.057 Test: blockdev write zeroes read no split ...passed 00:12:37.057 Test: blockdev write zeroes read split ...passed 00:12:37.057 Test: blockdev write zeroes read split partial ...passed 00:12:37.057 Test: blockdev reset ...passed 00:12:37.057 Test: blockdev write read 8 blocks ...passed 00:12:37.057 Test: blockdev write read size > 128k ...passed 00:12:37.057 Test: blockdev write read invalid size ...passed 00:12:37.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.057 Test: blockdev write read max offset ...passed 00:12:37.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.057 Test: blockdev writev readv 8 blocks ...passed 00:12:37.057 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.057 Test: blockdev writev readv block ...passed 00:12:37.057 Test: blockdev writev readv size > 128k ...passed 00:12:37.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.057 Test: blockdev comparev and writev ...passed 00:12:37.057 Test: blockdev nvme passthru rw ...passed 00:12:37.057 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.057 Test: blockdev nvme admin passthru ...passed 00:12:37.057 Test: blockdev copy ...passed 00:12:37.057 Suite: bdevio tests on: Malloc1p0 00:12:37.057 Test: blockdev write read block ...passed 00:12:37.057 Test: blockdev 
write zeroes read block ...passed 00:12:37.057 Test: blockdev write zeroes read no split ...passed 00:12:37.057 Test: blockdev write zeroes read split ...passed 00:12:37.057 Test: blockdev write zeroes read split partial ...passed 00:12:37.057 Test: blockdev reset ...passed 00:12:37.057 Test: blockdev write read 8 blocks ...passed 00:12:37.057 Test: blockdev write read size > 128k ...passed 00:12:37.057 Test: blockdev write read invalid size ...passed 00:12:37.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.057 Test: blockdev write read max offset ...passed 00:12:37.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.057 Test: blockdev writev readv 8 blocks ...passed 00:12:37.057 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.057 Test: blockdev writev readv block ...passed 00:12:37.057 Test: blockdev writev readv size > 128k ...passed 00:12:37.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.057 Test: blockdev comparev and writev ...passed 00:12:37.057 Test: blockdev nvme passthru rw ...passed 00:12:37.057 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.057 Test: blockdev nvme admin passthru ...passed 00:12:37.057 Test: blockdev copy ...passed 00:12:37.057 Suite: bdevio tests on: Malloc0 00:12:37.057 Test: blockdev write read block ...passed 00:12:37.057 Test: blockdev write zeroes read block ...passed 00:12:37.057 Test: blockdev write zeroes read no split ...passed 00:12:37.315 Test: blockdev write zeroes read split ...passed 00:12:37.315 Test: blockdev write zeroes read split partial ...passed 00:12:37.315 Test: blockdev reset ...passed 00:12:37.315 Test: blockdev write read 8 blocks ...passed 00:12:37.315 Test: blockdev write read size > 128k ...passed 00:12:37.315 Test: blockdev write read invalid size ...passed 00:12:37.315 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.315 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.315 Test: blockdev write read max offset ...passed 00:12:37.315 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.315 Test: blockdev writev readv 8 blocks ...passed 00:12:37.315 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.315 Test: blockdev writev readv block ...passed 00:12:37.315 Test: blockdev writev readv size > 128k ...passed 00:12:37.315 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.315 Test: blockdev comparev and writev ...passed 00:12:37.315 Test: blockdev nvme passthru rw ...passed 00:12:37.315 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.315 Test: blockdev nvme admin passthru ...passed 00:12:37.315 Test: blockdev copy ...passed 00:12:37.315 00:12:37.315 Run Summary: Type Total Ran Passed Failed Inactive 00:12:37.315 suites 16 16 n/a 0 0 00:12:37.315 tests 368 368 368 0 0 00:12:37.315 asserts 2224 2224 2224 0 n/a 00:12:37.315 00:12:37.315 Elapsed time = 4.182 seconds 00:12:37.315 0 00:12:37.315 13:36:16 -- bdev/blockdev.sh@293 -- # killprocess 111436 00:12:37.315 13:36:16 -- common/autotest_common.sh@926 -- # '[' -z 111436 ']' 00:12:37.315 13:36:16 -- common/autotest_common.sh@930 -- # kill -0 111436 00:12:37.315 13:36:16 -- common/autotest_common.sh@931 -- # uname 00:12:37.315 13:36:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:37.315 13:36:16 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111436 00:12:37.315 killing process with pid 111436 00:12:37.315 13:36:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:37.315 13:36:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:37.315 13:36:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111436' 00:12:37.315 13:36:16 -- common/autotest_common.sh@945 -- # kill 111436 00:12:37.315 13:36:16 -- common/autotest_common.sh@950 -- # wait 111436 00:12:39.844 ************************************ 00:12:39.844 END TEST bdev_bounds 00:12:39.844 ************************************ 00:12:39.844 13:36:19 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:39.844 00:12:39.844 real 0m6.606s 00:12:39.844 user 0m17.804s 00:12:39.844 sys 0m0.599s 00:12:39.844 13:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.844 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 13:36:19 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:39.844 13:36:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:39.844 13:36:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.844 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 ************************************ 00:12:39.844 START TEST bdev_nbd 00:12:39.844 ************************************ 00:12:39.844 13:36:19 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:39.844 13:36:19 -- bdev/blockdev.sh@298 -- # uname -s 00:12:39.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:39.844 13:36:19 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:39.844 13:36:19 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.844 13:36:19 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:39.844 13:36:19 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:12:39.844 13:36:19 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:39.844 13:36:19 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:39.844 13:36:19 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:39.844 13:36:19 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:12:39.844 13:36:19 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:39.844 13:36:19 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:39.844 13:36:19 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:12:39.844 13:36:19 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:39.844 13:36:19 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:12:39.844 13:36:19 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:39.844 13:36:19 -- bdev/blockdev.sh@316 -- # nbd_pid=111578 00:12:39.844 13:36:19 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:39.844 13:36:19 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:39.844 13:36:19 -- bdev/blockdev.sh@318 -- # waitforlisten 111578 /var/tmp/spdk-nbd.sock 00:12:39.844 13:36:19 -- common/autotest_common.sh@819 -- # '[' -z 111578 ']' 00:12:39.844 13:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:39.844 13:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:39.844 13:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:39.844 13:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:39.844 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:12:40.101 [2024-07-10 13:36:19.230555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
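The xtrace that follows comes from nbd_common.sh: bdev_svc has been started against /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json, the script waits for the RPC socket, and then each of the 16 bdevs is exported as an NBD device and verified with a single direct-I/O read. A condensed sketch of that per-device loop (illustrative; the variable names and loop shape are inferred from the trace below, not the verbatim script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  for bdev in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 \
              Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT \
              raid0 concat0 raid1 AIO0; do
      nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")       # SPDK assigns /dev/nbdN
      grep -q -w "$(basename "$nbd_device")" /proc/partitions      # the real helper retries this up to 20 times
      dd if="$nbd_device" of=nbdtest bs=4096 count=1 iflag=direct  # one direct read must succeed
      [ "$(stat -c %s nbdtest)" != 0 ]                             # and produce a non-empty file
  done

Each iteration corresponds to one nbd_start_disk / waitfornbd / dd cycle in the trace below; the loop runs while i < 16, and the resulting device-to-bdev mapping is dumped at the end via nbd_get_disks.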
00:12:40.101 [2024-07-10 13:36:19.230797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.101 [2024-07-10 13:36:19.381962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.359 [2024-07-10 13:36:19.629812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.940 [2024-07-10 13:36:20.074015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:40.940 [2024-07-10 13:36:20.074224] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:40.940 [2024-07-10 13:36:20.081965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:40.940 [2024-07-10 13:36:20.082117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:40.940 [2024-07-10 13:36:20.089958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:40.940 [2024-07-10 13:36:20.090076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:40.940 [2024-07-10 13:36:20.090125] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:41.267 [2024-07-10 13:36:20.328887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.267 [2024-07-10 13:36:20.329102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.267 [2024-07-10 13:36:20.329181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:41.267 [2024-07-10 13:36:20.329229] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.267 [2024-07-10 13:36:20.331598] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.267 [2024-07-10 13:36:20.331757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:41.835 13:36:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:41.835 13:36:20 -- common/autotest_common.sh@852 -- # return 0 00:12:41.835 13:36:20 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@24 -- # local i 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.835 13:36:20 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:42.094 13:36:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:42.094 13:36:21 -- common/autotest_common.sh@857 -- # local i 00:12:42.094 13:36:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.094 13:36:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.094 13:36:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:42.094 13:36:21 -- common/autotest_common.sh@861 -- # break 00:12:42.094 13:36:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.094 13:36:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.094 13:36:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.094 1+0 records in 00:12:42.094 1+0 records out 00:12:42.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268206 s, 15.3 MB/s 00:12:42.094 13:36:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.094 13:36:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.094 13:36:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.094 13:36:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.094 13:36:21 -- common/autotest_common.sh@877 -- # return 0 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.094 13:36:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:42.352 13:36:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:42.352 13:36:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:42.352 13:36:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:42.352 13:36:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:42.352 13:36:21 -- common/autotest_common.sh@857 -- # local i 00:12:42.352 13:36:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.352 13:36:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.352 13:36:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:42.352 13:36:21 -- common/autotest_common.sh@861 -- # break 00:12:42.352 13:36:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.352 13:36:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.352 13:36:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.352 1+0 records in 00:12:42.352 1+0 records out 00:12:42.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283548 s, 14.4 MB/s 00:12:42.352 13:36:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.352 13:36:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.352 13:36:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.352 13:36:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.352 13:36:21 -- common/autotest_common.sh@877 -- # return 0 00:12:42.352 13:36:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.352 13:36:21 -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.352 13:36:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:42.610 13:36:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:42.610 13:36:21 -- common/autotest_common.sh@857 -- # local i 00:12:42.610 13:36:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.610 13:36:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.610 13:36:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:42.610 13:36:21 -- common/autotest_common.sh@861 -- # break 00:12:42.610 13:36:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.610 13:36:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.610 13:36:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.610 1+0 records in 00:12:42.610 1+0 records out 00:12:42.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281057 s, 14.6 MB/s 00:12:42.610 13:36:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.610 13:36:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.610 13:36:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.610 13:36:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.610 13:36:21 -- common/autotest_common.sh@877 -- # return 0 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.610 13:36:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:42.869 13:36:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:42.869 13:36:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:42.869 13:36:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:42.869 13:36:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:42.869 13:36:22 -- common/autotest_common.sh@857 -- # local i 00:12:42.869 13:36:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.869 13:36:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.869 13:36:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:42.869 13:36:22 -- common/autotest_common.sh@861 -- # break 00:12:42.869 13:36:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.869 13:36:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.869 13:36:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.869 1+0 records in 00:12:42.869 1+0 records out 00:12:42.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044498 s, 9.2 MB/s 00:12:42.869 13:36:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.869 13:36:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.869 13:36:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.869 13:36:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.869 13:36:22 -- common/autotest_common.sh@877 -- # return 0 
00:12:42.869 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.869 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.869 13:36:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:43.127 13:36:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:43.127 13:36:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:43.385 13:36:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:43.385 13:36:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:43.385 13:36:22 -- common/autotest_common.sh@857 -- # local i 00:12:43.385 13:36:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.385 13:36:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.385 13:36:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:43.385 13:36:22 -- common/autotest_common.sh@861 -- # break 00:12:43.385 13:36:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.385 13:36:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.385 13:36:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.385 1+0 records in 00:12:43.385 1+0 records out 00:12:43.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429561 s, 9.5 MB/s 00:12:43.385 13:36:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.385 13:36:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.385 13:36:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.385 13:36:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.385 13:36:22 -- common/autotest_common.sh@877 -- # return 0 00:12:43.385 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.385 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.385 13:36:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:43.644 13:36:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:43.644 13:36:22 -- common/autotest_common.sh@857 -- # local i 00:12:43.644 13:36:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.644 13:36:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.644 13:36:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:43.644 13:36:22 -- common/autotest_common.sh@861 -- # break 00:12:43.644 13:36:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.644 13:36:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.644 13:36:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.644 1+0 records in 00:12:43.644 1+0 records out 00:12:43.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504423 s, 8.1 MB/s 00:12:43.644 13:36:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.644 13:36:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.644 13:36:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.644 13:36:22 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:12:43.644 13:36:22 -- common/autotest_common.sh@877 -- # return 0 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.644 13:36:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:43.902 13:36:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:43.902 13:36:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:43.902 13:36:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:43.902 13:36:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:43.902 13:36:23 -- common/autotest_common.sh@857 -- # local i 00:12:43.902 13:36:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.902 13:36:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.902 13:36:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:43.902 13:36:23 -- common/autotest_common.sh@861 -- # break 00:12:43.902 13:36:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.902 13:36:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.902 13:36:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.902 1+0 records in 00:12:43.902 1+0 records out 00:12:43.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414942 s, 9.9 MB/s 00:12:43.903 13:36:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.903 13:36:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.903 13:36:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.903 13:36:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.903 13:36:23 -- common/autotest_common.sh@877 -- # return 0 00:12:43.903 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.903 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.903 13:36:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:44.161 13:36:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:44.161 13:36:23 -- common/autotest_common.sh@857 -- # local i 00:12:44.161 13:36:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.161 13:36:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.161 13:36:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:44.161 13:36:23 -- common/autotest_common.sh@861 -- # break 00:12:44.161 13:36:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.161 13:36:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.161 13:36:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.161 1+0 records in 00:12:44.161 1+0 records out 00:12:44.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432669 s, 9.5 MB/s 00:12:44.161 13:36:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.161 13:36:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.161 13:36:23 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.161 13:36:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.161 13:36:23 -- common/autotest_common.sh@877 -- # return 0 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.161 13:36:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:44.420 13:36:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:44.420 13:36:23 -- common/autotest_common.sh@857 -- # local i 00:12:44.420 13:36:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.420 13:36:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.420 13:36:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:44.420 13:36:23 -- common/autotest_common.sh@861 -- # break 00:12:44.420 13:36:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.420 13:36:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.420 13:36:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.420 1+0 records in 00:12:44.420 1+0 records out 00:12:44.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570811 s, 7.2 MB/s 00:12:44.420 13:36:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.420 13:36:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.420 13:36:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.420 13:36:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.420 13:36:23 -- common/autotest_common.sh@877 -- # return 0 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.420 13:36:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:44.678 13:36:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:44.678 13:36:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:44.678 13:36:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:44.678 13:36:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:44.679 13:36:23 -- common/autotest_common.sh@857 -- # local i 00:12:44.679 13:36:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.679 13:36:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.679 13:36:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:44.679 13:36:23 -- common/autotest_common.sh@861 -- # break 00:12:44.679 13:36:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.679 13:36:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.679 13:36:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.679 1+0 records in 00:12:44.679 1+0 records out 00:12:44.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056071 s, 7.3 MB/s 00:12:44.679 13:36:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.679 13:36:23 -- 
common/autotest_common.sh@874 -- # size=4096 00:12:44.679 13:36:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.679 13:36:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.679 13:36:23 -- common/autotest_common.sh@877 -- # return 0 00:12:44.679 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.679 13:36:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.679 13:36:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:44.936 13:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:44.937 13:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:44.937 13:36:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:44.937 13:36:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:44.937 13:36:24 -- common/autotest_common.sh@857 -- # local i 00:12:44.937 13:36:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.937 13:36:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.937 13:36:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:44.937 13:36:24 -- common/autotest_common.sh@861 -- # break 00:12:44.937 13:36:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.937 13:36:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.937 13:36:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.937 1+0 records in 00:12:44.937 1+0 records out 00:12:44.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432936 s, 9.5 MB/s 00:12:44.937 13:36:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.937 13:36:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.937 13:36:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.937 13:36:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.937 13:36:24 -- common/autotest_common.sh@877 -- # return 0 00:12:44.937 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.937 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.937 13:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:45.195 13:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:45.195 13:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:45.195 13:36:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:45.195 13:36:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:45.195 13:36:24 -- common/autotest_common.sh@857 -- # local i 00:12:45.195 13:36:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:45.195 13:36:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:45.195 13:36:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:45.195 13:36:24 -- common/autotest_common.sh@861 -- # break 00:12:45.195 13:36:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:45.195 13:36:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:45.195 13:36:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.195 1+0 records in 00:12:45.195 1+0 records out 00:12:45.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507626 s, 8.1 MB/s 00:12:45.195 13:36:24 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.195 13:36:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:45.196 13:36:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.196 13:36:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:45.196 13:36:24 -- common/autotest_common.sh@877 -- # return 0 00:12:45.196 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.196 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.196 13:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:45.762 13:36:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:45.762 13:36:24 -- common/autotest_common.sh@857 -- # local i 00:12:45.762 13:36:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:45.762 13:36:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:45.762 13:36:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:45.762 13:36:24 -- common/autotest_common.sh@861 -- # break 00:12:45.762 13:36:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:45.762 13:36:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:45.762 13:36:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.762 1+0 records in 00:12:45.762 1+0 records out 00:12:45.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473314 s, 8.7 MB/s 00:12:45.762 13:36:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.762 13:36:24 -- common/autotest_common.sh@874 -- # size=4096 00:12:45.762 13:36:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.762 13:36:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:45.762 13:36:24 -- common/autotest_common.sh@877 -- # return 0 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.762 13:36:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:46.021 13:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:46.021 13:36:25 -- common/autotest_common.sh@857 -- # local i 00:12:46.021 13:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.021 13:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.021 13:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:46.021 13:36:25 -- common/autotest_common.sh@861 -- # break 00:12:46.021 13:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.021 13:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.021 13:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.021 1+0 records in 00:12:46.021 1+0 records out 
00:12:46.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579234 s, 7.1 MB/s 00:12:46.021 13:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 13:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.021 13:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 13:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.021 13:36:25 -- common/autotest_common.sh@877 -- # return 0 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:46.021 13:36:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:46.307 13:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:46.307 13:36:25 -- common/autotest_common.sh@857 -- # local i 00:12:46.307 13:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.307 13:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.307 13:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:46.307 13:36:25 -- common/autotest_common.sh@861 -- # break 00:12:46.307 13:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.307 13:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.307 13:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.307 1+0 records in 00:12:46.307 1+0 records out 00:12:46.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477251 s, 8.6 MB/s 00:12:46.307 13:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.307 13:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.307 13:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.307 13:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.307 13:36:25 -- common/autotest_common.sh@877 -- # return 0 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:46.307 13:36:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:46.566 13:36:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:46.566 13:36:25 -- common/autotest_common.sh@857 -- # local i 00:12:46.566 13:36:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.566 13:36:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.566 13:36:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:46.566 13:36:25 -- common/autotest_common.sh@861 -- # break 00:12:46.566 13:36:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.566 13:36:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.566 13:36:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.566 1+0 records in 00:12:46.566 1+0 records out 00:12:46.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591295 s, 6.9 MB/s 00:12:46.566 13:36:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.566 13:36:25 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.566 13:36:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.566 13:36:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.566 13:36:25 -- common/autotest_common.sh@877 -- # return 0 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:46.566 13:36:25 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.824 13:36:26 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:46.824 { 00:12:46.824 "nbd_device": "/dev/nbd0", 00:12:46.824 "bdev_name": "Malloc0" 00:12:46.824 }, 00:12:46.824 { 00:12:46.825 "nbd_device": "/dev/nbd1", 00:12:46.825 "bdev_name": "Malloc1p0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd2", 00:12:46.825 "bdev_name": "Malloc1p1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd3", 00:12:46.825 "bdev_name": "Malloc2p0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd4", 00:12:46.825 "bdev_name": "Malloc2p1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd5", 00:12:46.825 "bdev_name": "Malloc2p2" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd6", 00:12:46.825 "bdev_name": "Malloc2p3" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd7", 00:12:46.825 "bdev_name": "Malloc2p4" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd8", 00:12:46.825 "bdev_name": "Malloc2p5" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd9", 00:12:46.825 "bdev_name": "Malloc2p6" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd10", 00:12:46.825 "bdev_name": "Malloc2p7" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd11", 00:12:46.825 "bdev_name": "TestPT" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd12", 00:12:46.825 "bdev_name": "raid0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd13", 00:12:46.825 "bdev_name": "concat0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd14", 00:12:46.825 "bdev_name": "raid1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd15", 00:12:46.825 "bdev_name": "AIO0" 00:12:46.825 } 00:12:46.825 ]' 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd0", 00:12:46.825 "bdev_name": "Malloc0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd1", 00:12:46.825 "bdev_name": "Malloc1p0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd2", 00:12:46.825 "bdev_name": "Malloc1p1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd3", 00:12:46.825 "bdev_name": "Malloc2p0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd4", 00:12:46.825 "bdev_name": "Malloc2p1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd5", 00:12:46.825 "bdev_name": 
"Malloc2p2" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd6", 00:12:46.825 "bdev_name": "Malloc2p3" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd7", 00:12:46.825 "bdev_name": "Malloc2p4" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd8", 00:12:46.825 "bdev_name": "Malloc2p5" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd9", 00:12:46.825 "bdev_name": "Malloc2p6" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd10", 00:12:46.825 "bdev_name": "Malloc2p7" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd11", 00:12:46.825 "bdev_name": "TestPT" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd12", 00:12:46.825 "bdev_name": "raid0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd13", 00:12:46.825 "bdev_name": "concat0" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd14", 00:12:46.825 "bdev_name": "raid1" 00:12:46.825 }, 00:12:46.825 { 00:12:46.825 "nbd_device": "/dev/nbd15", 00:12:46.825 "bdev_name": "AIO0" 00:12:46.825 } 00:12:46.825 ]' 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@51 -- # local i 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.825 13:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@41 -- # break 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.127 13:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@41 -- # break 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.400 13:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:47.659 13:36:26 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@41 -- # break 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.659 13:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@41 -- # break 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.659 13:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@41 -- # break 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@41 -- # break 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.224 13:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@41 -- # 
break 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.481 13:36:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@41 -- # break 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.738 13:36:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@41 -- # break 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.996 13:36:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@41 -- # break 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.253 13:36:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@41 -- # break 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.511 13:36:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:49.770 
13:36:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@41 -- # break 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.770 13:36:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@41 -- # break 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.028 13:36:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:50.285 13:36:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@41 -- # break 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.286 13:36:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@41 -- # break 00:12:50.544 13:36:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.545 13:36:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.545 13:36:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:51.111 13:36:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@41 -- # break 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.112 13:36:30 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:51.112 13:36:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@65 -- # true 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@122 -- # count=0 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@127 -- # return 0 00:12:51.373 13:36:30 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@12 -- # local i 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.373 13:36:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:51.633 /dev/nbd0 00:12:51.633 13:36:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.633 13:36:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.633 13:36:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:51.633 13:36:30 -- common/autotest_common.sh@857 -- # local i 00:12:51.633 13:36:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:51.633 13:36:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:51.633 13:36:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:51.633 13:36:30 -- common/autotest_common.sh@861 -- # break 
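[annotation] The trace above has just finished the first half of a waitfornbd call for nbd0 (polling /proc/partitions until the device node appears) and continues below with the second half (a direct-I/O read-back through the device). The same autotest_common.sh@856-@877 pattern repeats for all sixteen devices in this section. A reconstruction of the whole helper from the xtrace, hedged: the sleep back-off and the retry-on-empty-read behavior are inferred from the loop counters, since every device in this run succeeds on the first pass, and the scratch path is copied from the log.

waitfornbd() {
    local nbd_name=$1
    local i
    # First loop (@859-@861): poll until the kernel lists the device
    # in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1 # assumed back-off; never reached in this trace
    done
    # Second loop (@872-@877): prove the device is actually readable by
    # pulling one 4 KiB block with O_DIRECT and checking the copied size.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        if [ "$size" != 0 ]; then
            return 0
        fi
    done
    return 1 # assumed failure path; not exercised in this run
}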
00:12:51.633 13:36:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:51.633 13:36:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:51.633 13:36:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.633 1+0 records in 00:12:51.633 1+0 records out 00:12:51.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359589 s, 11.4 MB/s 00:12:51.633 13:36:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.633 13:36:30 -- common/autotest_common.sh@874 -- # size=4096 00:12:51.633 13:36:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.633 13:36:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:51.633 13:36:30 -- common/autotest_common.sh@877 -- # return 0 00:12:51.633 13:36:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.633 13:36:30 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.633 13:36:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:51.890 /dev/nbd1 00:12:51.890 13:36:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.890 13:36:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.890 13:36:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:51.890 13:36:31 -- common/autotest_common.sh@857 -- # local i 00:12:51.891 13:36:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:51.891 13:36:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:51.891 13:36:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:51.891 13:36:31 -- common/autotest_common.sh@861 -- # break 00:12:51.891 13:36:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:51.891 13:36:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:51.891 13:36:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.891 1+0 records in 00:12:51.891 1+0 records out 00:12:51.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586836 s, 7.0 MB/s 00:12:51.891 13:36:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.891 13:36:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:51.891 13:36:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.891 13:36:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:51.891 13:36:31 -- common/autotest_common.sh@877 -- # return 0 00:12:51.891 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.891 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.891 13:36:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:52.148 /dev/nbd10 00:12:52.148 13:36:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:52.148 13:36:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:52.148 13:36:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:52.148 13:36:31 -- common/autotest_common.sh@857 -- # local i 00:12:52.148 13:36:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:52.148 13:36:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:52.148 13:36:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:52.148 13:36:31 -- 
common/autotest_common.sh@861 -- # break 00:12:52.148 13:36:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:52.148 13:36:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:52.148 13:36:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.148 1+0 records in 00:12:52.148 1+0 records out 00:12:52.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425948 s, 9.6 MB/s 00:12:52.148 13:36:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.148 13:36:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:52.148 13:36:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.148 13:36:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:52.148 13:36:31 -- common/autotest_common.sh@877 -- # return 0 00:12:52.148 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.148 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.148 13:36:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:52.406 /dev/nbd11 00:12:52.406 13:36:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:52.406 13:36:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:52.406 13:36:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:52.406 13:36:31 -- common/autotest_common.sh@857 -- # local i 00:12:52.406 13:36:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:52.406 13:36:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:52.406 13:36:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:52.406 13:36:31 -- common/autotest_common.sh@861 -- # break 00:12:52.406 13:36:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:52.406 13:36:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:52.406 13:36:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.406 1+0 records in 00:12:52.406 1+0 records out 00:12:52.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397276 s, 10.3 MB/s 00:12:52.406 13:36:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.406 13:36:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:52.406 13:36:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.406 13:36:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:52.406 13:36:31 -- common/autotest_common.sh@877 -- # return 0 00:12:52.406 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.406 13:36:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.406 13:36:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:52.665 /dev/nbd12 00:12:52.665 13:36:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:52.665 13:36:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:52.665 13:36:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:52.665 13:36:32 -- common/autotest_common.sh@857 -- # local i 00:12:52.665 13:36:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:52.665 13:36:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:52.665 13:36:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 
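[annotation] Each nbd_start_disk/waitfornbd pair in this stretch comes from the nbd_start_disks driver (bdev/nbd_common.sh@9-@17), which walks the bdev list and the /dev/nbdX list in lockstep. A rough reconstruction from the trace; the ${#nbd_list[@]} loop bound and the $rootdir prefix are inferred, since xtrace only shows their expanded values (16 and /home/vagrant/spdk_repo/spdk).

nbd_start_disks() {
    local rpc_server=$1
    local bdev_list=($2) # whitespace-separated bdev names
    local nbd_list=($3)  # matching /dev/nbdX paths, same order
    local i
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Export bdev i on its pre-chosen nbd node via the RPC socket...
        "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        # ...then block until the kernel device is usable (see waitfornbd above).
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
}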
00:12:52.665 13:36:32 -- common/autotest_common.sh@861 -- # break 00:12:52.665 13:36:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:52.665 13:36:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:52.665 13:36:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.665 1+0 records in 00:12:52.665 1+0 records out 00:12:52.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407787 s, 10.0 MB/s 00:12:52.951 13:36:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.951 13:36:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:52.951 13:36:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.951 13:36:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:52.951 13:36:32 -- common/autotest_common.sh@877 -- # return 0 00:12:52.951 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.951 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.951 13:36:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:53.209 /dev/nbd13 00:12:53.209 13:36:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:53.209 13:36:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:53.209 13:36:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:53.209 13:36:32 -- common/autotest_common.sh@857 -- # local i 00:12:53.209 13:36:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:53.209 13:36:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:53.209 13:36:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:53.209 13:36:32 -- common/autotest_common.sh@861 -- # break 00:12:53.209 13:36:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:53.209 13:36:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:53.209 13:36:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.209 1+0 records in 00:12:53.209 1+0 records out 00:12:53.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507921 s, 8.1 MB/s 00:12:53.209 13:36:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.209 13:36:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:53.209 13:36:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.210 13:36:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:53.210 13:36:32 -- common/autotest_common.sh@877 -- # return 0 00:12:53.210 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.210 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.210 13:36:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:53.468 /dev/nbd14 00:12:53.468 13:36:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:53.468 13:36:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:53.468 13:36:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:53.468 13:36:32 -- common/autotest_common.sh@857 -- # local i 00:12:53.468 13:36:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:53.468 13:36:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:53.468 13:36:32 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd14 /proc/partitions 00:12:53.468 13:36:32 -- common/autotest_common.sh@861 -- # break 00:12:53.468 13:36:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:53.468 13:36:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:53.468 13:36:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.468 1+0 records in 00:12:53.468 1+0 records out 00:12:53.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512224 s, 8.0 MB/s 00:12:53.468 13:36:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.468 13:36:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:53.468 13:36:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.468 13:36:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:53.468 13:36:32 -- common/autotest_common.sh@877 -- # return 0 00:12:53.468 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.468 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.468 13:36:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:53.763 /dev/nbd15 00:12:53.763 13:36:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:53.763 13:36:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:53.763 13:36:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:53.763 13:36:32 -- common/autotest_common.sh@857 -- # local i 00:12:53.763 13:36:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:53.763 13:36:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:53.763 13:36:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:53.763 13:36:32 -- common/autotest_common.sh@861 -- # break 00:12:53.763 13:36:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:53.763 13:36:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:53.763 13:36:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.763 1+0 records in 00:12:53.763 1+0 records out 00:12:53.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482644 s, 8.5 MB/s 00:12:53.763 13:36:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.763 13:36:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:53.763 13:36:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.763 13:36:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:53.763 13:36:32 -- common/autotest_common.sh@877 -- # return 0 00:12:53.763 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.763 13:36:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.763 13:36:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:54.022 /dev/nbd2 00:12:54.022 13:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:54.022 13:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:54.022 13:36:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:54.022 13:36:33 -- common/autotest_common.sh@857 -- # local i 00:12:54.022 13:36:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:54.022 13:36:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:54.022 13:36:33 -- common/autotest_common.sh@860 
-- # grep -q -w nbd2 /proc/partitions 00:12:54.022 13:36:33 -- common/autotest_common.sh@861 -- # break 00:12:54.022 13:36:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:54.022 13:36:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:54.022 13:36:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.022 1+0 records in 00:12:54.022 1+0 records out 00:12:54.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049855 s, 8.2 MB/s 00:12:54.022 13:36:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.022 13:36:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:54.022 13:36:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.022 13:36:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:54.022 13:36:33 -- common/autotest_common.sh@877 -- # return 0 00:12:54.022 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.022 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.022 13:36:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:54.282 /dev/nbd3 00:12:54.282 13:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:54.282 13:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:54.282 13:36:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:54.282 13:36:33 -- common/autotest_common.sh@857 -- # local i 00:12:54.282 13:36:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:54.282 13:36:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:54.282 13:36:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:54.282 13:36:33 -- common/autotest_common.sh@861 -- # break 00:12:54.282 13:36:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:54.282 13:36:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:54.282 13:36:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.282 1+0 records in 00:12:54.282 1+0 records out 00:12:54.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732513 s, 5.6 MB/s 00:12:54.282 13:36:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.282 13:36:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:54.282 13:36:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.282 13:36:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:54.282 13:36:33 -- common/autotest_common.sh@877 -- # return 0 00:12:54.282 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.282 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.282 13:36:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:54.541 /dev/nbd4 00:12:54.799 13:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:54.799 13:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:54.799 13:36:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:54.799 13:36:33 -- common/autotest_common.sh@857 -- # local i 00:12:54.799 13:36:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:54.799 13:36:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:54.799 13:36:33 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:54.799 13:36:33 -- common/autotest_common.sh@861 -- # break 00:12:54.799 13:36:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:54.799 13:36:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:54.799 13:36:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.799 1+0 records in 00:12:54.799 1+0 records out 00:12:54.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567181 s, 7.2 MB/s 00:12:54.800 13:36:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.800 13:36:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:54.800 13:36:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.800 13:36:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:54.800 13:36:33 -- common/autotest_common.sh@877 -- # return 0 00:12:54.800 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.800 13:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.800 13:36:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:55.058 /dev/nbd5 00:12:55.058 13:36:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:55.058 13:36:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:55.058 13:36:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:55.058 13:36:34 -- common/autotest_common.sh@857 -- # local i 00:12:55.058 13:36:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:55.058 13:36:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:55.058 13:36:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:55.058 13:36:34 -- common/autotest_common.sh@861 -- # break 00:12:55.058 13:36:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:55.058 13:36:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:55.058 13:36:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.058 1+0 records in 00:12:55.058 1+0 records out 00:12:55.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767316 s, 5.3 MB/s 00:12:55.058 13:36:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.058 13:36:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:55.058 13:36:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.058 13:36:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:55.058 13:36:34 -- common/autotest_common.sh@877 -- # return 0 00:12:55.058 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.058 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.058 13:36:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:55.317 /dev/nbd6 00:12:55.317 13:36:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:55.317 13:36:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:55.317 13:36:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:55.317 13:36:34 -- common/autotest_common.sh@857 -- # local i 00:12:55.317 13:36:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:55.317 13:36:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:55.317 13:36:34 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:55.317 13:36:34 -- common/autotest_common.sh@861 -- # break 00:12:55.317 13:36:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:55.317 13:36:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:55.317 13:36:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.317 1+0 records in 00:12:55.317 1+0 records out 00:12:55.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708452 s, 5.8 MB/s 00:12:55.317 13:36:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.317 13:36:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:55.318 13:36:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.318 13:36:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:55.318 13:36:34 -- common/autotest_common.sh@877 -- # return 0 00:12:55.318 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.318 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.318 13:36:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:55.578 /dev/nbd7 00:12:55.578 13:36:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:55.578 13:36:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:55.578 13:36:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:55.578 13:36:34 -- common/autotest_common.sh@857 -- # local i 00:12:55.578 13:36:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:55.578 13:36:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:55.578 13:36:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:55.578 13:36:34 -- common/autotest_common.sh@861 -- # break 00:12:55.578 13:36:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:55.578 13:36:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:55.578 13:36:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.578 1+0 records in 00:12:55.578 1+0 records out 00:12:55.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445623 s, 9.2 MB/s 00:12:55.578 13:36:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.578 13:36:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:55.578 13:36:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.578 13:36:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:55.578 13:36:34 -- common/autotest_common.sh@877 -- # return 0 00:12:55.578 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.578 13:36:34 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.578 13:36:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:55.836 /dev/nbd8 00:12:55.836 13:36:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:55.836 13:36:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:55.836 13:36:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:55.836 13:36:35 -- common/autotest_common.sh@857 -- # local i 00:12:55.836 13:36:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:55.836 13:36:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:55.836 13:36:35 
-- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:55.836 13:36:35 -- common/autotest_common.sh@861 -- # break 00:12:55.836 13:36:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:55.836 13:36:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:55.836 13:36:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.836 1+0 records in 00:12:55.836 1+0 records out 00:12:55.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471402 s, 8.7 MB/s 00:12:55.836 13:36:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.836 13:36:35 -- common/autotest_common.sh@874 -- # size=4096 00:12:55.836 13:36:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.836 13:36:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:55.836 13:36:35 -- common/autotest_common.sh@877 -- # return 0 00:12:55.836 13:36:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.836 13:36:35 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.836 13:36:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:56.094 /dev/nbd9 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:56.094 13:36:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:56.094 13:36:35 -- common/autotest_common.sh@857 -- # local i 00:12:56.094 13:36:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:56.094 13:36:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:56.094 13:36:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:56.094 13:36:35 -- common/autotest_common.sh@861 -- # break 00:12:56.094 13:36:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:56.094 13:36:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:56.094 13:36:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.094 1+0 records in 00:12:56.094 1+0 records out 00:12:56.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000954518 s, 4.3 MB/s 00:12:56.094 13:36:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.094 13:36:35 -- common/autotest_common.sh@874 -- # size=4096 00:12:56.094 13:36:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.094 13:36:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:56.094 13:36:35 -- common/autotest_common.sh@877 -- # return 0 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.094 13:36:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.353 13:36:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd0", 00:12:56.353 "bdev_name": "Malloc0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd1", 00:12:56.353 "bdev_name": "Malloc1p0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 
"nbd_device": "/dev/nbd10", 00:12:56.353 "bdev_name": "Malloc1p1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd11", 00:12:56.353 "bdev_name": "Malloc2p0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd12", 00:12:56.353 "bdev_name": "Malloc2p1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd13", 00:12:56.353 "bdev_name": "Malloc2p2" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd14", 00:12:56.353 "bdev_name": "Malloc2p3" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd15", 00:12:56.353 "bdev_name": "Malloc2p4" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd2", 00:12:56.353 "bdev_name": "Malloc2p5" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd3", 00:12:56.353 "bdev_name": "Malloc2p6" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd4", 00:12:56.353 "bdev_name": "Malloc2p7" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd5", 00:12:56.353 "bdev_name": "TestPT" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd6", 00:12:56.353 "bdev_name": "raid0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd7", 00:12:56.353 "bdev_name": "concat0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd8", 00:12:56.353 "bdev_name": "raid1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd9", 00:12:56.353 "bdev_name": "AIO0" 00:12:56.353 } 00:12:56.353 ]' 00:12:56.353 13:36:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd0", 00:12:56.353 "bdev_name": "Malloc0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd1", 00:12:56.353 "bdev_name": "Malloc1p0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd10", 00:12:56.353 "bdev_name": "Malloc1p1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd11", 00:12:56.353 "bdev_name": "Malloc2p0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd12", 00:12:56.353 "bdev_name": "Malloc2p1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd13", 00:12:56.353 "bdev_name": "Malloc2p2" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd14", 00:12:56.353 "bdev_name": "Malloc2p3" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd15", 00:12:56.353 "bdev_name": "Malloc2p4" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd2", 00:12:56.353 "bdev_name": "Malloc2p5" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd3", 00:12:56.353 "bdev_name": "Malloc2p6" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd4", 00:12:56.353 "bdev_name": "Malloc2p7" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd5", 00:12:56.353 "bdev_name": "TestPT" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd6", 00:12:56.353 "bdev_name": "raid0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd7", 00:12:56.353 "bdev_name": "concat0" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd8", 00:12:56.353 "bdev_name": "raid1" 00:12:56.353 }, 00:12:56.353 { 00:12:56.353 "nbd_device": "/dev/nbd9", 00:12:56.353 "bdev_name": "AIO0" 00:12:56.353 } 00:12:56.353 ]' 00:12:56.353 13:36:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.353 13:36:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:56.353 /dev/nbd1 00:12:56.353 /dev/nbd10 00:12:56.353 /dev/nbd11 00:12:56.353 
/dev/nbd12 00:12:56.353 /dev/nbd13 00:12:56.354 /dev/nbd14 00:12:56.354 /dev/nbd15 00:12:56.354 /dev/nbd2 00:12:56.354 /dev/nbd3 00:12:56.354 /dev/nbd4 00:12:56.354 /dev/nbd5 00:12:56.354 /dev/nbd6 00:12:56.354 /dev/nbd7 00:12:56.354 /dev/nbd8 00:12:56.354 /dev/nbd9' 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:56.354 /dev/nbd1 00:12:56.354 /dev/nbd10 00:12:56.354 /dev/nbd11 00:12:56.354 /dev/nbd12 00:12:56.354 /dev/nbd13 00:12:56.354 /dev/nbd14 00:12:56.354 /dev/nbd15 00:12:56.354 /dev/nbd2 00:12:56.354 /dev/nbd3 00:12:56.354 /dev/nbd4 00:12:56.354 /dev/nbd5 00:12:56.354 /dev/nbd6 00:12:56.354 /dev/nbd7 00:12:56.354 /dev/nbd8 00:12:56.354 /dev/nbd9' 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@65 -- # count=16 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@95 -- # count=16 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:56.354 256+0 records in 00:12:56.354 256+0 records out 00:12:56.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645542 s, 162 MB/s 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.354 13:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:56.612 256+0 records in 00:12:56.612 256+0 records out 00:12:56.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0848089 s, 12.4 MB/s 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:56.612 256+0 records in 00:12:56.612 256+0 records out 00:12:56.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0993465 s, 10.6 MB/s 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:56.612 256+0 records in 00:12:56.612 256+0 records out 00:12:56.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0885686 s, 11.8 MB/s 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.612 13:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:56.870 256+0 records in 00:12:56.870 256+0 records out 00:12:56.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0979814 s, 10.7 MB/s 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:56.870 256+0 records in 00:12:56.870 256+0 records out 00:12:56.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.087201 s, 12.0 MB/s 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:56.870 256+0 records in 00:12:56.870 256+0 records out 00:12:56.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0857155 s, 12.2 MB/s 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.870 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:57.128 256+0 records in 00:12:57.128 256+0 records out 00:12:57.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0856787 s, 12.2 MB/s 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:57.128 256+0 records in 00:12:57.128 256+0 records out 00:12:57.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0906514 s, 11.6 MB/s 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:57.128 256+0 records in 00:12:57.128 256+0 records out 00:12:57.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0896497 s, 11.7 MB/s 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.128 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:57.387 256+0 records in 00:12:57.387 256+0 records out 00:12:57.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0893869 s, 11.7 MB/s 00:12:57.387 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.387 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:57.387 256+0 records in 00:12:57.387 256+0 records out 00:12:57.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0862294 s, 12.2 MB/s 00:12:57.387 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.387 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:57.644 256+0 records in 00:12:57.644 256+0 records out 00:12:57.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0916706 s, 11.4 MB/s 00:12:57.644 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.644 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:57.644 256+0 records in 00:12:57.644 256+0 records out 00:12:57.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.093564 s, 11.2 MB/s 00:12:57.644 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.644 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:57.644 256+0 records in 00:12:57.645 256+0 records out 00:12:57.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0896791 s, 11.7 MB/s 
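[annotation] The dd pass running above and below is the write arm of nbd_dd_data_verify (bdev/nbd_common.sh@70-@83): fill a 1 MiB scratch file from /dev/urandom, copy it onto every exported device with O_DIRECT, and later byte-compare the first 1 MiB of each device against the same file with cmp. A sketch assembled from the trace; the commands, sizes, and file path are taken from the log, while the exact argument handling is inferred.

nbd_dd_data_verify() {
    local nbd_list=($1) # whitespace-separated /dev/nbdX paths
    local operation=$2  # "write" or "verify"
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    local i
    if [ "$operation" = "write" ]; then
        # 256 x 4 KiB = 1 MiB of random data, fanned out to every device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = "verify" ]; then
        # cmp exits non-zero on the first differing byte, failing the test.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
    fi
}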
00:12:57.645 13:36:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.645 13:36:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:57.901 256+0 records in 00:12:57.901 256+0 records out 00:12:57.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0953369 s, 11.0 MB/s 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:57.901 256+0 records in 00:12:57.901 256+0 records out 00:12:57.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132585 s, 7.9 MB/s 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.901 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@51 -- # local i 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.159 13:36:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@41 -- # break 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.417 13:36:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@41 -- # break 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.675 13:36:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@41 -- # break 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.935 13:36:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@41 -- # break 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.193 13:36:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@41 -- # break 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.451 13:36:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@41 -- # break 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.709 13:36:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.710 13:36:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.967 13:36:39 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@41 -- # break 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.967 13:36:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@41 -- # break 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.276 13:36:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@41 -- # break 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.555 13:36:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@41 -- # break 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.813 13:36:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@41 -- # break 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.071 13:36:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@55 -- 
# basename /dev/nbd5 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@41 -- # break 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.328 13:36:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@41 -- # break 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.586 13:36:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@41 -- # break 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.845 13:36:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@41 -- # break 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.103 13:36:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@41 -- # break 00:13:02.361 
13:36:41 -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.361 13:36:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@65 -- # true 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@65 -- # count=0 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@104 -- # count=0 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@109 -- # return 0 00:13:02.621 13:36:41 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:02.621 13:36:41 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:02.878 malloc_lvol_verify 00:13:02.878 13:36:42 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:03.137 c9b7df81-a96c-4977-a276-a3b873e72e6e 00:13:03.137 13:36:42 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:03.396 a1ae1389-9ad2-47bf-9889-bfb66db345a9 00:13:03.396 13:36:42 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:03.654 /dev/nbd0 00:13:03.654 13:36:42 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:03.654 mke2fs 1.45.5 (07-Jan-2020) 00:13:03.654 00:13:03.654 Filesystem too small for a journal 00:13:03.654 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:03.654 00:13:03.654 Allocating group tables: 0/1 done 00:13:03.654 Writing inode tables: 0/1 done 00:13:03.654 Writing superblocks and filesystem accounting information: 0/1 done 00:13:03.654 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@51 -- # local i 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.655 13:36:42 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@41 -- # break 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:03.913 13:36:43 -- bdev/nbd_common.sh@147 -- # return 0 00:13:03.913 13:36:43 -- bdev/blockdev.sh@324 -- # killprocess 111578 00:13:03.913 13:36:43 -- common/autotest_common.sh@926 -- # '[' -z 111578 ']' 00:13:03.913 13:36:43 -- common/autotest_common.sh@930 -- # kill -0 111578 00:13:03.913 13:36:43 -- common/autotest_common.sh@931 -- # uname 00:13:03.913 13:36:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:03.913 13:36:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111578 00:13:03.913 killing process with pid 111578 00:13:03.913 13:36:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:03.913 13:36:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:03.913 13:36:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111578' 00:13:03.913 13:36:43 -- common/autotest_common.sh@945 -- # kill 111578 00:13:03.913 13:36:43 -- common/autotest_common.sh@950 -- # wait 111578 00:13:07.197 ************************************ 00:13:07.197 END TEST bdev_nbd 00:13:07.197 ************************************ 00:13:07.197 13:36:46 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:13:07.197 00:13:07.197 real 0m26.945s 00:13:07.197 user 0m37.198s 00:13:07.197 sys 0m9.531s 00:13:07.197 13:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.197 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.197 13:36:46 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:13:07.197 13:36:46 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:13:07.197 13:36:46 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:13:07.197 13:36:46 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.198 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.198 ************************************ 00:13:07.198 START TEST bdev_fio 00:13:07.198 ************************************ 00:13:07.198 13:36:46 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@329 -- # local env_context 00:13:07.198 13:36:46 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.198 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:07.198 13:36:46 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:07.198 13:36:46 -- bdev/blockdev.sh@337 -- # echo '' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:13:07.198 13:36:46 -- bdev/blockdev.sh@337 -- # env_context= 00:13:07.198 13:36:46 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:07.198 
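The bdev_fio stage that begins here exercises the same sixteen bdevs through fio's external ioengine rather than the kernel nbd path: fio_config_gen seeds bdev.fio with a verify workload, one [job_<name>] section is emitted per bdev with filename= set to the bdev name, and fio is launched against the spdk_bdev plugin. Because this build runs under ASan, libasan is listed in LD_PRELOAD ahead of the plugin so the sanitizer runtime is loaded before the plugin's constructors run. A condensed sketch assembled from the parameters visible in this run; the append-to-bdev.fio redirection is an assumption, since xtrace does not print redirections:

    spdk=/home/vagrant/spdk_repo/spdk
    # one job section per bdev; filename= names the bdev inside the SPDK app
    for b in Malloc0 Malloc1p0 Malloc1p1 Malloc2p{0..7} TestPT raid0 concat0 raid1 AIO0; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$spdk/test/bdev/bdev.fio"  # redirect assumed
    done
    # the ASan runtime must precede the fio plugin in LD_PRELOAD
    LD_PRELOAD="/lib/x86_64-linux-gnu/libasan.so.5 $spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$spdk/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$spdk/test/bdev/bdev.json" --spdk_mem=0 \
        --aux-path="$spdk/../output"
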
13:36:46 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:07.198 13:36:46 -- common/autotest_common.sh@1260 -- # local workload=verify 00:13:07.198 13:36:46 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:13:07.198 13:36:46 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:07.198 13:36:46 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:07.198 13:36:46 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:07.198 13:36:46 -- common/autotest_common.sh@1280 -- # cat 00:13:07.198 13:36:46 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1293 -- # cat 00:13:07.198 13:36:46 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:13:07.198 13:36:46 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:07.198 13:36:46 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- 
bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:13:07.198 13:36:46 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:13:07.198 13:36:46 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:13:07.198 13:36:46 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:07.198 13:36:46 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.198 13:36:46 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.198 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.198 ************************************ 00:13:07.198 START TEST bdev_fio_rw_verify 00:13:07.198 ************************************ 00:13:07.198 13:36:46 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.198 13:36:46 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.198 13:36:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:07.198 13:36:46 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:07.198 13:36:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:07.198 13:36:46 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.198 13:36:46 -- 
common/autotest_common.sh@1320 -- # shift 00:13:07.198 13:36:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:07.198 13:36:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.198 13:36:46 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.198 13:36:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:07.198 13:36:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:07.198 13:36:46 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:07.198 13:36:46 -- common/autotest_common.sh@1326 -- # break 00:13:07.198 13:36:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.198 13:36:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.198 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.198 fio-3.35 00:13:07.198 Starting 16 threads 00:13:19.449 00:13:19.449 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=112810: Wed Jul 10 13:36:58 2024 00:13:19.449 read: IOPS=67.2k, 
BW=263MiB/s (275MB/s)(2627MiB/10003msec) 00:13:19.449 slat (usec): min=2, max=36018, avg=38.42, stdev=396.45 00:13:19.449 clat (usec): min=8, max=36262, avg=315.81, stdev=1214.75 00:13:19.449 lat (usec): min=24, max=36292, avg=354.23, stdev=1277.85 00:13:19.449 clat percentiles (usec): 00:13:19.449 | 50.000th=[ 184], 99.000th=[ 1172], 99.900th=[16319], 99.990th=[24249], 00:13:19.449 | 99.999th=[32375] 00:13:19.449 write: IOPS=109k, BW=424MiB/s (445MB/s)(4188MiB/9876msec); 0 zone resets 00:13:19.449 slat (usec): min=8, max=57323, avg=75.29, stdev=646.35 00:13:19.449 clat (usec): min=10, max=57640, avg=422.47, stdev=1494.79 00:13:19.449 lat (usec): min=38, max=57681, avg=497.76, stdev=1628.78 00:13:19.449 clat percentiles (usec): 00:13:19.449 | 50.000th=[ 243], 99.000th=[ 8094], 99.900th=[16712], 99.990th=[28443], 00:13:19.449 | 99.999th=[51119] 00:13:19.449 bw ( KiB/s): min=238497, max=679040, per=98.43%, avg=427356.05, stdev=7641.96, samples=304 00:13:19.449 iops : min=59624, max=169759, avg=106838.79, stdev=1910.49, samples=304 00:13:19.449 lat (usec) : 10=0.01%, 20=0.01%, 50=0.52%, 100=10.17%, 250=49.79% 00:13:19.449 lat (usec) : 500=33.61%, 750=2.92%, 1000=0.84% 00:13:19.449 lat (msec) : 2=1.00%, 4=0.10%, 10=0.29%, 20=0.69%, 50=0.06% 00:13:19.449 lat (msec) : 100=0.01% 00:13:19.449 cpu : usr=57.73%, sys=1.90%, ctx=242131, majf=0, minf=74355 00:13:19.449 IO depths : 1=11.5%, 2=24.4%, 4=51.3%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.449 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.449 issued rwts: total=672390,1072019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:19.449 00:13:19.449 Run status group 0 (all jobs): 00:13:19.449 READ: bw=263MiB/s (275MB/s), 263MiB/s-263MiB/s (275MB/s-275MB/s), io=2627MiB (2754MB), run=10003-10003msec 00:13:19.449 WRITE: bw=424MiB/s (445MB/s), 424MiB/s-424MiB/s (445MB/s-445MB/s), io=4188MiB (4391MB), run=9876-9876msec 00:13:22.000 ----------------------------------------------------- 00:13:22.000 Suppressions used: 00:13:22.000 count bytes template 00:13:22.000 16 140 /usr/src/fio/parse.c 00:13:22.000 11649 1118304 /usr/src/fio/iolog.c 00:13:22.000 2 596 libcrypto.so 00:13:22.000 ----------------------------------------------------- 00:13:22.000 00:13:22.000 ************************************ 00:13:22.000 END TEST bdev_fio_rw_verify 00:13:22.000 ************************************ 00:13:22.000 00:13:22.000 real 0m14.912s 00:13:22.000 user 1m38.601s 00:13:22.000 sys 0m4.057s 00:13:22.000 13:37:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.000 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:22.000 13:37:01 -- bdev/blockdev.sh@348 -- # rm -f 00:13:22.000 13:37:01 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.000 13:37:01 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.000 13:37:01 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:22.000 13:37:01 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:22.000 13:37:01 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:22.000 13:37:01 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:22.000 13:37:01 -- 
common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.000 13:37:01 -- common/autotest_common.sh@1280 -- # cat 00:13:22.000 13:37:01 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:22.000 13:37:01 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:22.000 13:37:01 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.001 13:37:01 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fd9037fc-d2bb-463c-8a02-6c6b92971223"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd9037fc-d2bb-463c-8a02-6c6b92971223",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "a07ededc-9db1-5fab-8f2c-73a837dc8416"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a07ededc-9db1-5fab-8f2c-73a837dc8416",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "392657ad-7ad8-5c49-b4e6-8cc7dea0728b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "392657ad-7ad8-5c49-b4e6-8cc7dea0728b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d611bcef-036d-5bee-99d8-2ed63bcd8c1b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d611bcef-036d-5bee-99d8-2ed63bcd8c1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5876ee49-053c-5fa1-bc77-4e89b0492567"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5876ee49-053c-5fa1-bc77-4e89b0492567",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e4578445-8bb8-53e4-bff1-1da1a602f1b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e4578445-8bb8-53e4-bff1-1da1a602f1b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "de5bd42a-6929-50e1-b892-ef6ec8b98c88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5bd42a-6929-50e1-b892-ef6ec8b98c88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "84581bc4-cbd4-5ca5-b908-f0246ea38ed7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "84581bc4-cbd4-5ca5-b908-f0246ea38ed7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "079fba75-f23e-5243-b9b5-e0936552f324"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "079fba75-f23e-5243-b9b5-e0936552f324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d908f580-9087-5ae6-a6df-c602f7670bf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d908f580-9087-5ae6-a6df-c602f7670bf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "98487151-9deb-5892-8b6a-6dc8f66d042d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "98487151-9deb-5892-8b6a-6dc8f66d042d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "669fafb8-348c-44f5-8f2b-d016bc8f9a78"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6d66f85e-5da2-4be6-8387-5022ab155002",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af4f8ed1-a1da-4bb7-a8cd-534ad6a4b911",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1e77d01b-f84f-4033-8215-eae186d18197"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1999e92a-7fc7-4bd7-89c3-a8e34651f87b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4644dcad-926d-4b3b-be73-b4bcc300748b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' 
"num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f9d82030-8350-47a1-a82c-ff815c604a4e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5e7bed47-0ae6-45ef-9bb7-4ccb1041c6d4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:22.001 13:37:01 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:22.001 Malloc1p0 00:13:22.001 Malloc1p1 00:13:22.001 Malloc2p0 00:13:22.001 Malloc2p1 00:13:22.001 Malloc2p2 00:13:22.001 Malloc2p3 00:13:22.001 Malloc2p4 00:13:22.001 Malloc2p5 00:13:22.001 Malloc2p6 00:13:22.001 Malloc2p7 00:13:22.001 TestPT 00:13:22.001 raid0 00:13:22.001 concat0 ]] 00:13:22.001 13:37:01 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fd9037fc-d2bb-463c-8a02-6c6b92971223"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fd9037fc-d2bb-463c-8a02-6c6b92971223",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "a07ededc-9db1-5fab-8f2c-73a837dc8416"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a07ededc-9db1-5fab-8f2c-73a837dc8416",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "392657ad-7ad8-5c49-b4e6-8cc7dea0728b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "392657ad-7ad8-5c49-b4e6-8cc7dea0728b",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d611bcef-036d-5bee-99d8-2ed63bcd8c1b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d611bcef-036d-5bee-99d8-2ed63bcd8c1b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "5876ee49-053c-5fa1-bc77-4e89b0492567"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5876ee49-053c-5fa1-bc77-4e89b0492567",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e4578445-8bb8-53e4-bff1-1da1a602f1b2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e4578445-8bb8-53e4-bff1-1da1a602f1b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "de5bd42a-6929-50e1-b892-ef6ec8b98c88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5bd42a-6929-50e1-b892-ef6ec8b98c88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' 
' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "84581bc4-cbd4-5ca5-b908-f0246ea38ed7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "84581bc4-cbd4-5ca5-b908-f0246ea38ed7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "079fba75-f23e-5243-b9b5-e0936552f324"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "079fba75-f23e-5243-b9b5-e0936552f324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0621cd01-1e00-5dd4-ad19-cb8dd7100cf5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d908f580-9087-5ae6-a6df-c602f7670bf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d908f580-9087-5ae6-a6df-c602f7670bf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "98487151-9deb-5892-8b6a-6dc8f66d042d"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "98487151-9deb-5892-8b6a-6dc8f66d042d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "669fafb8-348c-44f5-8f2b-d016bc8f9a78"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "669fafb8-348c-44f5-8f2b-d016bc8f9a78",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "6d66f85e-5da2-4be6-8387-5022ab155002",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "af4f8ed1-a1da-4bb7-a8cd-534ad6a4b911",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "1e77d01b-f84f-4033-8215-eae186d18197"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1e77d01b-f84f-4033-8215-eae186d18197",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "1999e92a-7fc7-4bd7-89c3-a8e34651f87b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "4644dcad-926d-4b3b-be73-b4bcc300748b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa7f97ad-9750-4627-8dac-31e3fb1bcbf4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f9d82030-8350-47a1-a82c-ff815c604a4e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5e7bed47-0ae6-45ef-9bb7-4ccb1041c6d4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ded19c44-32c2-4f1a-bf57-a3fcfdcdbe9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:22.002 
13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:22.002 13:37:01 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:22.002 13:37:01 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:22.002 13:37:01 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:22.002 13:37:01 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.002 13:37:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:22.002 13:37:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:22.002 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:22.002 ************************************ 00:13:22.002 START TEST bdev_fio_trim 00:13:22.002 ************************************ 00:13:22.002 13:37:01 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 
--bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.002 13:37:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.002 13:37:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:22.002 13:37:01 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:22.002 13:37:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:22.002 13:37:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.002 13:37:01 -- common/autotest_common.sh@1320 -- # shift 00:13:22.002 13:37:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:22.002 13:37:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.002 13:37:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.002 13:37:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:22.002 13:37:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:22.002 13:37:01 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:22.002 13:37:01 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:22.002 13:37:01 -- common/autotest_common.sh@1326 -- # break 00:13:22.002 13:37:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.002 13:37:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.262 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:22.262 fio-3.35 00:13:22.262 Starting 14 threads 00:13:34.493 00:13:34.493 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=113059: Wed Jul 10 13:37:13 2024 00:13:34.493 write: IOPS=153k, BW=598MiB/s (627MB/s)(5977MiB/10001msec); 0 zone resets 00:13:34.493 slat (usec): min=2, max=28055, avg=32.18, stdev=366.23 00:13:34.493 clat (usec): min=24, max=28291, avg=235.66, stdev=1042.40 00:13:34.493 lat (usec): min=35, max=28318, avg=267.84, stdev=1104.38 00:13:34.493 clat percentiles (usec): 00:13:34.493 | 50.000th=[ 155], 99.000th=[ 478], 99.900th=[16188], 99.990th=[20317], 00:13:34.493 | 99.999th=[28181] 00:13:34.493 bw ( KiB/s): min=422077, max=916690, per=100.00%, avg=614925.99, stdev=11276.58, samples=268 00:13:34.493 iops : min=105519, max=229174, avg=153731.42, stdev=2819.15, samples=268 00:13:34.493 trim: IOPS=153k, BW=598MiB/s (627MB/s)(5977MiB/10001msec); 0 zone resets 00:13:34.493 slat (usec): min=4, max=26675, avg=22.60, stdev=310.76 00:13:34.493 clat (usec): min=4, max=28318, avg=245.07, stdev=1024.60 00:13:34.493 lat (usec): min=14, max=28333, avg=267.67, stdev=1070.51 00:13:34.493 clat percentiles (usec): 00:13:34.493 | 50.000th=[ 172], 99.000th=[ 396], 99.900th=[16319], 99.990th=[20317], 00:13:34.493 | 99.999th=[28181] 00:13:34.493 bw ( KiB/s): min=422077, max=916698, per=100.00%, avg=614927.25, stdev=11276.87, samples=268 00:13:34.493 iops : min=105519, max=229174, avg=153731.53, stdev=2819.21, samples=268 00:13:34.493 lat (usec) : 10=0.06%, 20=0.16%, 50=0.77%, 100=11.36%, 250=78.79% 00:13:34.493 lat (usec) : 500=8.09%, 750=0.20%, 1000=0.03% 00:13:34.493 lat (msec) : 2=0.02%, 4=0.01%, 10=0.08%, 20=0.41%, 50=0.01% 00:13:34.493 cpu : usr=68.72%, sys=0.47%, ctx=160798, majf=0, minf=836 00:13:34.493 IO depths : 1=12.3%, 2=24.7%, 4=50.1%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.493 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.493 issued rwts: total=0,1530002,1530007,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:34.493 00:13:34.493 Run status group 0 (all jobs): 00:13:34.493 WRITE: bw=598MiB/s (627MB/s), 598MiB/s-598MiB/s (627MB/s-627MB/s), io=5977MiB (6267MB), run=10001-10001msec 00:13:34.493 TRIM: bw=598MiB/s (627MB/s), 598MiB/s-598MiB/s (627MB/s-627MB/s), io=5977MiB (6267MB), run=10001-10001msec 00:13:37.034 ----------------------------------------------------- 00:13:37.034 Suppressions used: 00:13:37.034 count bytes template 00:13:37.034 14 129 /usr/src/fio/parse.c 00:13:37.034 2 596 libcrypto.so 00:13:37.034 ----------------------------------------------------- 00:13:37.034 00:13:37.034 ************************************ 00:13:37.034 END TEST bdev_fio_trim 00:13:37.034 ************************************ 00:13:37.034 00:13:37.034 real 0m14.664s 00:13:37.034 user 1m42.300s 00:13:37.034 sys 0m1.505s 00:13:37.034 
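The trim job file exercised above is assembled by bdev/blockdev.sh immediately before this run: lines @354-@356 in the trace loop over the JSON dump printed earlier and emit one [job_<name>] section per bdev whose supported_io_types.unmap flag is true. A minimal bash sketch of that step, assuming bdevs holds the per-bdev JSON blobs and fio_config names the job file (both variable names are illustrative, not the script's own):

    # Emit a trim job section for every unmap-capable bdev.
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"    >> "$fio_config"
        echo "filename=$b" >> "$fio_config"
    done

This filter is why exactly 14 threads start: raid1 and AIO0 both report "unmap": false in the dump above, so they get no job section.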
13:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.034 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.034 13:37:15 -- bdev/blockdev.sh@366 -- # rm -f 00:13:37.034 13:37:15 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:37.034 /home/vagrant/spdk_repo/spdk 00:13:37.034 ************************************ 00:13:37.034 END TEST bdev_fio 00:13:37.034 ************************************ 00:13:37.034 13:37:15 -- bdev/blockdev.sh@368 -- # popd 00:13:37.034 13:37:15 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:37.034 00:13:37.034 real 0m29.852s 00:13:37.034 user 3m21.099s 00:13:37.034 sys 0m5.646s 00:13:37.034 13:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.034 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:13:37.034 13:37:16 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:37.034 13:37:16 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:37.034 13:37:16 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:37.034 13:37:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.034 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:13:37.034 ************************************ 00:13:37.034 START TEST bdev_verify 00:13:37.035 ************************************ 00:13:37.035 13:37:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:37.035 [2024-07-10 13:37:16.106057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:37.035 [2024-07-10 13:37:16.106301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113274 ] 00:13:37.035 [2024-07-10 13:37:16.274312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.294 [2024-07-10 13:37:16.505993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.294 [2024-07-10 13:37:16.506000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.862 [2024-07-10 13:37:16.933590] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.862 [2024-07-10 13:37:16.933773] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.862 [2024-07-10 13:37:16.941521] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.862 [2024-07-10 13:37:16.941678] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.862 [2024-07-10 13:37:16.949543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:37.862 [2024-07-10 13:37:16.949672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:37.862 [2024-07-10 13:37:16.949733] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:37.862 [2024-07-10 13:37:17.164879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:37.862 [2024-07-10 13:37:17.165133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.862 [2024-07-10 13:37:17.165208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:37.862 [2024-07-10 13:37:17.165254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.862 [2024-07-10 13:37:17.167703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.862 [2024-07-10 13:37:17.167823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:38.431 Running I/O for 5 seconds... 
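Each verify pass here is driven by bdevperf with the flags shown above: 128 outstanding I/Os (-q) of 4096 bytes each (-o), the verify workload (-w), a 5-second run (-t), on two cores (-m 0x3), against the bdev stack described by bdev.json. The "vbdev creation deferred pending base bdev arrival" notice is the passthru module registering its claim before Malloc3 exists; once Malloc3 is examined, the "Match on Malloc3" and "created pt_bdev for: TestPT" lines complete the stack. A hedged sketch of building the same two-layer stack by hand with SPDK's rpc.py (bdev names taken from this log; exact option spellings can vary between SPDK releases):

    # Base malloc bdev (128 MiB, 512-byte blocks), then a passthru vbdev on top of it.
    scripts/rpc.py bdev_malloc_create -b Malloc3 128 512
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT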
00:13:43.732 00:13:43.732 Latency(us) 00:13:43.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.732 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x1000 00:13:43.732 Malloc0 : 5.18 1650.62 6.45 0.00 0.00 76942.75 2089.14 157515.35 00:13:43.732 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x1000 length 0x1000 00:13:43.732 Malloc0 : 5.23 1555.05 6.07 0.00 0.00 81955.11 2432.56 222536.22 00:13:43.732 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x800 00:13:43.732 Malloc1p0 : 5.18 1128.39 4.41 0.00 0.00 112299.39 4321.37 141946.97 00:13:43.732 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x800 length 0x800 00:13:43.732 Malloc1p0 : 5.24 1084.26 4.24 0.00 0.00 117332.27 4407.22 147441.69 00:13:43.732 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x800 00:13:43.732 Malloc1p1 : 5.18 1128.12 4.41 0.00 0.00 112155.78 4235.51 138283.82 00:13:43.732 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x800 length 0x800 00:13:43.732 Malloc1p1 : 5.24 1084.01 4.23 0.00 0.00 117141.25 4292.75 146525.90 00:13:43.732 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x200 00:13:43.732 Malloc2p0 : 5.18 1127.83 4.41 0.00 0.00 112005.13 4435.84 133704.89 00:13:43.732 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x200 length 0x200 00:13:43.732 Malloc2p0 : 5.24 1083.77 4.23 0.00 0.00 116969.55 4378.61 145610.12 00:13:43.732 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x200 00:13:43.732 Malloc2p1 : 5.18 1127.55 4.40 0.00 0.00 111836.49 4235.51 130041.74 00:13:43.732 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x200 length 0x200 00:13:43.732 Malloc2p1 : 5.24 1083.52 4.23 0.00 0.00 116766.90 4235.51 145610.12 00:13:43.732 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x200 00:13:43.732 Malloc2p2 : 5.19 1127.29 4.40 0.00 0.00 111712.99 4063.80 125462.81 00:13:43.732 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x200 length 0x200 00:13:43.732 Malloc2p2 : 5.24 1083.28 4.23 0.00 0.00 116619.17 4063.80 146525.90 00:13:43.732 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.732 Verification LBA range: start 0x0 length 0x200 00:13:43.732 Malloc2p3 : 5.19 1127.01 4.40 0.00 0.00 111562.77 4264.13 121799.66 00:13:43.733 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x200 length 0x200 00:13:43.733 Malloc2p3 : 5.24 1083.04 4.23 0.00 0.00 116443.63 4435.84 148357.48 00:13:43.733 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x200 00:13:43.733 Malloc2p4 : 5.19 1126.72 4.40 0.00 0.00 
111404.78 3892.09 118136.51 00:13:43.733 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x200 length 0x200 00:13:43.733 Malloc2p4 : 5.24 1082.79 4.23 0.00 0.00 116258.99 4006.57 148357.48 00:13:43.733 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x200 00:13:43.733 Malloc2p5 : 5.19 1126.45 4.40 0.00 0.00 111246.50 4378.61 113557.58 00:13:43.733 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x200 length 0x200 00:13:43.733 Malloc2p5 : 5.24 1082.54 4.23 0.00 0.00 116085.78 4206.90 148357.48 00:13:43.733 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x200 00:13:43.733 Malloc2p6 : 5.19 1126.16 4.40 0.00 0.00 111105.28 4292.75 108978.64 00:13:43.733 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x200 length 0x200 00:13:43.733 Malloc2p6 : 5.25 1082.30 4.23 0.00 0.00 115898.74 4235.51 147441.69 00:13:43.733 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x200 00:13:43.733 Malloc2p7 : 5.19 1125.87 4.40 0.00 0.00 110947.74 4149.66 104399.71 00:13:43.733 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x200 length 0x200 00:13:43.733 Malloc2p7 : 5.25 1082.06 4.23 0.00 0.00 115719.53 4063.80 146525.90 00:13:43.733 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x1000 00:13:43.733 TestPT : 5.19 1117.08 4.36 0.00 0.00 111698.96 6610.84 104857.60 00:13:43.733 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x1000 length 0x1000 00:13:43.733 TestPT : 5.25 1049.98 4.10 0.00 0.00 119010.59 18888.10 169420.58 00:13:43.733 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x2000 00:13:43.733 raid0 : 5.20 1140.54 4.46 0.00 0.00 109446.28 4378.61 93868.16 00:13:43.733 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x2000 length 0x2000 00:13:43.733 raid0 : 5.25 1081.53 4.22 0.00 0.00 115298.30 4407.22 146525.90 00:13:43.733 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x2000 00:13:43.733 concat0 : 5.21 1140.25 4.45 0.00 0.00 109296.56 4435.84 92952.37 00:13:43.733 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x2000 length 0x2000 00:13:43.733 concat0 : 5.25 1081.29 4.22 0.00 0.00 115112.13 4464.46 146525.90 00:13:43.733 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x0 length 0x1000 00:13:43.733 raid1 : 5.21 1139.92 4.45 0.00 0.00 109137.33 5237.16 92036.58 00:13:43.733 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x1000 length 0x1000 00:13:43.733 raid1 : 5.25 1081.02 4.22 0.00 0.00 114917.61 5265.77 146525.90 00:13:43.733 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 
Verification LBA range: start 0x0 length 0x4e2 00:13:43.733 AIO0 : 5.21 1135.85 4.44 0.00 0.00 109322.52 2575.65 91120.80 00:13:43.733 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.733 Verification LBA range: start 0x4e2 length 0x4e2 00:13:43.733 AIO0 : 5.25 1073.75 4.19 0.00 0.00 115453.94 3605.91 147441.69 00:13:43.733 =================================================================================================================== 00:13:43.733 Total : 36349.84 141.99 0.00 0.00 110608.15 2089.14 222536.22 00:13:47.008 ************************************ 00:13:47.008 END TEST bdev_verify 00:13:47.008 ************************************ 00:13:47.008 00:13:47.008 real 0m9.729s 00:13:47.008 user 0m17.617s 00:13:47.008 sys 0m0.572s 00:13:47.008 13:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.008 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:13:47.008 13:37:25 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:47.008 13:37:25 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:47.008 13:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.008 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:13:47.008 ************************************ 00:13:47.008 START TEST bdev_verify_big_io 00:13:47.008 ************************************ 00:13:47.008 13:37:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:47.008 [2024-07-10 13:37:25.861135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:47.008 [2024-07-10 13:37:25.861488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113426 ] 00:13:47.008 [2024-07-10 13:37:26.047485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.008 [2024-07-10 13:37:26.313776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.008 [2024-07-10 13:37:26.313778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.575 [2024-07-10 13:37:26.771927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.575 [2024-07-10 13:37:26.772121] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.575 [2024-07-10 13:37:26.779888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.575 [2024-07-10 13:37:26.779989] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.575 [2024-07-10 13:37:26.787917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.575 [2024-07-10 13:37:26.787981] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:47.575 [2024-07-10 13:37:26.788047] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:47.833 [2024-07-10 13:37:27.021511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.833 [2024-07-10 13:37:27.021772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.833 [2024-07-10 13:37:27.021854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:47.833 [2024-07-10 13:37:27.021902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.833 [2024-07-10 13:37:27.024923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.833 [2024-07-10 13:37:27.025010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.400 [2024-07-10 13:37:27.470399] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.474680] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.479749] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.484706] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.489054] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.494049] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.497924] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.502561] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.506825] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.511998] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.516026] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.520624] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.524516] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.529126] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.534358] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.538496] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:48.400 [2024-07-10 13:37:27.644944] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:48.400 [2024-07-10 13:37:27.654347] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:48.400 Running I/O for 5 seconds... 00:13:54.967 00:13:54.967 Latency(us) 00:13:54.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.967 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x100 00:13:54.967 Malloc0 : 5.43 369.49 23.09 0.00 0.00 337747.01 20948.63 1479911.63 00:13:54.967 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x100 length 0x100 00:13:54.967 Malloc0 : 5.60 312.39 19.52 0.00 0.00 398865.01 20605.21 1721679.37 00:13:54.967 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x80 00:13:54.967 Malloc1p0 : 5.49 330.51 20.66 0.00 0.00 372314.58 34570.96 575114.17 00:13:54.967 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x80 length 0x80 00:13:54.967 Malloc1p0 : 5.76 168.89 10.56 0.00 0.00 712084.14 66852.44 2329761.87 00:13:54.967 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x80 00:13:54.967 Malloc1p1 : 5.62 147.47 9.22 0.00 0.00 823227.80 33426.22 1904836.75 00:13:54.967 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x80 length 0x80 00:13:54.967 Malloc1p1 : 5.99 111.47 6.97 0.00 0.00 1037384.34 75094.53 2197888.56 00:13:54.967 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x20 00:13:54.967 Malloc2p0 : 5.49 86.33 5.40 0.00 0.00 353136.71 5780.90 501851.22 00:13:54.967 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x20 length 0x20 00:13:54.967 Malloc2p0 : 5.69 64.72 4.04 0.00 0.00 449674.00 12076.94 934102.64 00:13:54.967 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x20 00:13:54.967 Malloc2p1 : 5.49 86.32 5.39 0.00 0.00 352019.93 5924.00 490861.78 00:13:54.967 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x20 length 0x20 00:13:54.967 Malloc2p1 : 5.69 64.70 4.04 0.00 0.00 446159.15 13507.86 904797.46 00:13:54.967 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x20 00:13:54.967 Malloc2p2 : 5.49 86.30 5.39 0.00 0.00 350840.03 6095.71 483535.48 00:13:54.967 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x20 length 0x20 00:13:54.967 Malloc2p2 : 5.76 67.75 4.23 0.00 0.00 425071.37 13565.09 879155.42 00:13:54.967 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x20 00:13:54.967 Malloc2p3 : 5.49 86.29 5.39 0.00 0.00 349816.77 5408.87 472546.04 00:13:54.967 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x20 length 0x20 00:13:54.967 Malloc2p3 : 5.76 67.73 4.23 0.00 0.00 422013.54 12134.18 853513.39 00:13:54.967 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x0 length 0x20 00:13:54.967 Malloc2p4 : 5.49 86.27 5.39 0.00 0.00 348709.14 5895.38 461556.60 00:13:54.967 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.967 Verification LBA range: start 0x20 length 0x20 00:13:54.967 Malloc2p4 : 5.76 67.72 4.23 0.00 0.00 419314.53 9901.95 835197.65 00:13:54.967 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x20 00:13:54.968 Malloc2p5 : 5.50 86.26 5.39 0.00 0.00 347558.94 5780.90 450567.15 00:13:54.968 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x20 length 0x20 00:13:54.968 Malloc2p5 : 5.83 69.92 4.37 0.00 0.00 402981.52 10531.55 809555.62 00:13:54.968 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x20 00:13:54.968 Malloc2p6 : 5.50 86.24 5.39 0.00 0.00 346400.08 6467.74 439577.71 00:13:54.968 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x20 length 0x20 00:13:54.968 Malloc2p6 : 5.84 69.91 4.37 0.00 0.00 400239.49 10703.26 787576.73 00:13:54.968 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x20 00:13:54.968 Malloc2p7 : 5.50 86.23 5.39 0.00 0.00 345269.23 5838.14 428588.27 00:13:54.968 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x20 length 0x20 00:13:54.968 Malloc2p7 : 5.90 72.70 4.54 0.00 0.00 383374.76 10588.79 761934.70 00:13:54.968 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x100 00:13:54.968 TestPT : 5.62 142.59 8.91 0.00 0.00 824559.05 40981.46 1904836.75 00:13:54.968 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x100 length 0x100 00:13:54.968 TestPT : 5.90 148.73 9.30 0.00 0.00 742824.17 48078.81 1860878.98 00:13:54.968 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x200 00:13:54.968 raid0 : 5.65 157.31 9.83 0.00 0.00 742566.94 32739.38 1904836.75 00:13:54.968 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x200 length 0x200 00:13:54.968 raid0 : 5.95 156.29 9.77 0.00 0.00 696708.64 43270.93 2080667.84 00:13:54.968 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x200 00:13:54.968 concat0 : 5.66 163.03 10.19 0.00 0.00 708974.01 32281.49 1919489.34 00:13:54.968 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x200 length 0x200 00:13:54.968 concat0 : 6.07 208.99 
13.06 0.00 0.00 509404.70 27130.19 2051362.66 00:13:54.968 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x100 00:13:54.968 raid1 : 5.66 168.93 10.56 0.00 0.00 676552.48 19002.58 1934141.93 00:13:54.968 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x100 length 0x100 00:13:54.968 raid1 : 6.16 248.66 15.54 0.00 0.00 418587.62 10130.89 2007404.88 00:13:54.968 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x0 length 0x4e 00:13:54.968 AIO0 : 5.66 176.40 11.02 0.00 0.00 391954.26 2604.27 1135575.76 00:13:54.968 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:54.968 Verification LBA range: start 0x4e length 0x4e 00:13:54.968 AIO0 : 6.26 284.11 17.76 0.00 0.00 218620.34 579.52 1120923.17 00:13:54.968 =================================================================================================================== 00:13:54.968 Total : 4530.65 283.17 0.00 0.00 491517.09 579.52 2329761.87 00:13:57.500 ************************************ 00:13:57.500 END TEST bdev_verify_big_io 00:13:57.500 ************************************ 00:13:57.500 00:13:57.500 real 0m10.920s 00:13:57.500 user 0m19.951s 00:13:57.500 sys 0m0.688s 00:13:57.500 13:37:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.500 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:13:57.500 13:37:36 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.500 13:37:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:57.500 13:37:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.500 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:13:57.500 ************************************ 00:13:57.500 START TEST bdev_write_zeroes 00:13:57.500 ************************************ 00:13:57.500 13:37:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.500 [2024-07-10 13:37:36.837117] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:57.500 [2024-07-10 13:37:36.837334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113598 ] 00:13:57.759 [2024-07-10 13:37:37.002030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.057 [2024-07-10 13:37:37.203105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.317 [2024-07-10 13:37:37.575041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:58.317 [2024-07-10 13:37:37.575193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:58.317 [2024-07-10 13:37:37.583025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:58.317 [2024-07-10 13:37:37.583133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:58.317 [2024-07-10 13:37:37.591016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:58.317 [2024-07-10 13:37:37.591081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:58.317 [2024-07-10 13:37:37.591120] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:58.575 [2024-07-10 13:37:37.785436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:58.575 [2024-07-10 13:37:37.785603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.575 [2024-07-10 13:37:37.785656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:58.575 [2024-07-10 13:37:37.785696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.575 [2024-07-10 13:37:37.787619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.575 [2024-07-10 13:37:37.787699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:58.834 Running I/O for 1 seconds... 
00:14:00.210 00:14:00.210 Latency(us) 00:14:00.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.210 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc0 : 1.04 6045.07 23.61 0.00 0.00 21160.41 590.25 36860.42 00:14:00.210 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc1p0 : 1.04 6036.72 23.58 0.00 0.00 21155.00 804.89 36173.58 00:14:00.210 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc1p1 : 1.04 6029.40 23.55 0.00 0.00 21146.37 754.81 35486.74 00:14:00.210 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p0 : 1.04 6022.50 23.53 0.00 0.00 21125.77 801.31 34799.90 00:14:00.210 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p1 : 1.04 6015.09 23.50 0.00 0.00 21111.23 747.65 34113.06 00:14:00.210 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p2 : 1.04 6007.96 23.47 0.00 0.00 21098.90 794.16 33884.12 00:14:00.210 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p3 : 1.05 6001.56 23.44 0.00 0.00 21077.56 747.65 33426.22 00:14:00.210 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p4 : 1.05 5994.61 23.42 0.00 0.00 21069.50 754.81 32739.38 00:14:00.210 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p5 : 1.05 5987.92 23.39 0.00 0.00 21054.91 804.89 32052.54 00:14:00.210 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p6 : 1.05 5981.47 23.37 0.00 0.00 21033.22 815.62 31136.75 00:14:00.210 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 Malloc2p7 : 1.05 5975.14 23.34 0.00 0.00 21022.63 797.74 30449.91 00:14:00.210 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 TestPT : 1.05 5968.53 23.31 0.00 0.00 21003.90 801.31 29763.07 00:14:00.210 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 raid0 : 1.05 5960.71 23.28 0.00 0.00 20981.69 1387.99 28274.92 00:14:00.210 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.210 concat0 : 1.05 5953.36 23.26 0.00 0.00 20943.37 1373.68 27015.71 00:14:00.210 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.211 raid1 : 1.06 5944.06 23.22 0.00 0.00 20899.31 2203.61 25184.14 00:14:00.211 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.211 AIO0 : 1.06 5931.18 23.17 0.00 0.00 20854.50 1166.20 24153.88 00:14:00.211 =================================================================================================================== 00:14:00.211 Total : 95855.28 374.43 0.00 0.00 21046.16 590.25 36860.42 00:14:02.745 00:14:02.745 real 0m4.732s 00:14:02.745 user 0m4.157s 00:14:02.745 sys 0m0.377s 00:14:02.746 13:37:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.746 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:14:02.746 ************************************ 00:14:02.746 END TEST bdev_write_zeroes 00:14:02.746 ************************************ 00:14:02.746 13:37:41 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.746 13:37:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:02.746 13:37:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.746 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:14:02.746 ************************************ 00:14:02.746 START TEST bdev_json_nonenclosed 00:14:02.746 ************************************ 00:14:02.746 13:37:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.746 [2024-07-10 13:37:41.634232] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:02.746 [2024-07-10 13:37:41.634415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113699 ] 00:14:02.746 [2024-07-10 13:37:41.790809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.746 [2024-07-10 13:37:41.978451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.746 [2024-07-10 13:37:41.978720] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:02.746 [2024-07-10 13:37:41.978793] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.387 00:14:03.387 real 0m0.816s 00:14:03.387 user 0m0.584s 00:14:03.387 sys 0m0.132s 00:14:03.387 13:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.387 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:03.387 ************************************ 00:14:03.387 END TEST bdev_json_nonenclosed 00:14:03.387 ************************************ 00:14:03.387 13:37:42 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.387 13:37:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:14:03.387 13:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.387 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:14:03.387 ************************************ 00:14:03.387 START TEST bdev_json_nonarray 00:14:03.387 ************************************ 00:14:03.387 13:37:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.387 [2024-07-10 13:37:42.515715] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:03.387 [2024-07-10 13:37:42.515945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113722 ] 00:14:03.387 [2024-07-10 13:37:42.674054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.646 [2024-07-10 13:37:42.897249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.646 [2024-07-10 13:37:42.897500] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:03.646 [2024-07-10 13:37:42.897562] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.215 ************************************ 00:14:04.215 END TEST bdev_json_nonarray 00:14:04.215 ************************************ 00:14:04.215 00:14:04.215 real 0m0.873s 00:14:04.215 user 0m0.640s 00:14:04.215 sys 0m0.130s 00:14:04.215 13:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.215 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:04.215 13:37:43 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:14:04.215 13:37:43 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:14:04.215 13:37:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:04.215 13:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.215 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:04.215 ************************************ 00:14:04.215 START TEST bdev_qos 00:14:04.215 ************************************ 00:14:04.215 13:37:43 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:14:04.215 13:37:43 -- bdev/blockdev.sh@444 -- # QOS_PID=113760 00:14:04.215 13:37:43 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:04.215 13:37:43 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 113760' 00:14:04.215 Process qos testing pid: 113760 00:14:04.215 13:37:43 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:04.215 13:37:43 -- bdev/blockdev.sh@447 -- # waitforlisten 113760 00:14:04.215 13:37:43 -- common/autotest_common.sh@819 -- # '[' -z 113760 ']' 00:14:04.215 13:37:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.215 13:37:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.215 13:37:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.215 13:37:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.215 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:04.215 [2024-07-10 13:37:43.455226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:04.215 [2024-07-10 13:37:43.455413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113760 ] 00:14:04.475 [2024-07-10 13:37:43.612707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.475 [2024-07-10 13:37:43.802321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.043 13:37:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.043 13:37:44 -- common/autotest_common.sh@852 -- # return 0 00:14:05.043 13:37:44 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:05.043 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.043 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:05.302 Malloc_0 00:14:05.302 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.302 13:37:44 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:14:05.302 13:37:44 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:14:05.302 13:37:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:05.302 13:37:44 -- common/autotest_common.sh@889 -- # local i 00:14:05.302 13:37:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:05.302 13:37:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:05.302 13:37:44 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:05.302 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.302 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:05.302 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.302 13:37:44 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:05.302 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.302 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:05.302 [ 00:14:05.302 { 00:14:05.302 "name": "Malloc_0", 00:14:05.302 "aliases": [ 00:14:05.302 "c9371153-c857-4847-abe8-40f7b5101e5e" 00:14:05.302 ], 00:14:05.302 "product_name": "Malloc disk", 00:14:05.303 "block_size": 512, 00:14:05.303 "num_blocks": 262144, 00:14:05.303 "uuid": "c9371153-c857-4847-abe8-40f7b5101e5e", 00:14:05.303 "assigned_rate_limits": { 00:14:05.303 "rw_ios_per_sec": 0, 00:14:05.303 "rw_mbytes_per_sec": 0, 00:14:05.303 "r_mbytes_per_sec": 0, 00:14:05.303 "w_mbytes_per_sec": 0 00:14:05.303 }, 00:14:05.303 "claimed": false, 00:14:05.303 "zoned": false, 00:14:05.303 "supported_io_types": { 00:14:05.303 "read": true, 00:14:05.303 "write": true, 00:14:05.303 "unmap": true, 00:14:05.303 "write_zeroes": true, 00:14:05.303 "flush": true, 00:14:05.303 "reset": true, 00:14:05.303 "compare": false, 00:14:05.303 "compare_and_write": false, 00:14:05.303 "abort": true, 00:14:05.303 "nvme_admin": false, 00:14:05.303 "nvme_io": false 00:14:05.303 }, 00:14:05.303 "memory_domains": [ 00:14:05.303 { 00:14:05.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.303 "dma_device_type": 2 00:14:05.303 } 00:14:05.303 ], 00:14:05.303 "driver_specific": {} 00:14:05.303 } 00:14:05.303 ] 00:14:05.303 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.303 13:37:44 -- common/autotest_common.sh@895 -- # return 0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:05.303 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.303 13:37:44 -- common/autotest_common.sh@10 -- # 
set +x 00:14:05.303 Null_1 00:14:05.303 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.303 13:37:44 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:14:05.303 13:37:44 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:14:05.303 13:37:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:05.303 13:37:44 -- common/autotest_common.sh@889 -- # local i 00:14:05.303 13:37:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:05.303 13:37:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:05.303 13:37:44 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:05.303 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.303 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.303 13:37:44 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:05.303 13:37:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.303 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 [ 00:14:05.303 { 00:14:05.303 "name": "Null_1", 00:14:05.303 "aliases": [ 00:14:05.303 "05fac366-c8ff-4897-8296-86aadde884f8" 00:14:05.303 ], 00:14:05.303 "product_name": "Null disk", 00:14:05.303 "block_size": 512, 00:14:05.303 "num_blocks": 262144, 00:14:05.303 "uuid": "05fac366-c8ff-4897-8296-86aadde884f8", 00:14:05.303 "assigned_rate_limits": { 00:14:05.303 "rw_ios_per_sec": 0, 00:14:05.303 "rw_mbytes_per_sec": 0, 00:14:05.303 "r_mbytes_per_sec": 0, 00:14:05.303 "w_mbytes_per_sec": 0 00:14:05.303 }, 00:14:05.303 "claimed": false, 00:14:05.303 "zoned": false, 00:14:05.303 "supported_io_types": { 00:14:05.303 "read": true, 00:14:05.303 "write": true, 00:14:05.303 "unmap": false, 00:14:05.303 "write_zeroes": true, 00:14:05.303 "flush": false, 00:14:05.303 "reset": true, 00:14:05.303 "compare": false, 00:14:05.303 "compare_and_write": false, 00:14:05.303 "abort": true, 00:14:05.303 "nvme_admin": false, 00:14:05.303 "nvme_io": false 00:14:05.303 }, 00:14:05.303 "driver_specific": {} 00:14:05.303 } 00:14:05.303 ] 00:14:05.303 13:37:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.303 13:37:44 -- common/autotest_common.sh@895 -- # return 0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@455 -- # qos_function_test 00:14:05.303 13:37:44 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.303 13:37:44 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:14:05.303 13:37:44 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:14:05.303 13:37:44 -- bdev/blockdev.sh@410 -- # local io_result=0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:05.303 13:37:44 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:05.303 13:37:44 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:05.303 13:37:44 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:05.303 13:37:44 -- bdev/blockdev.sh@376 -- # tail -1 00:14:05.303 Running I/O for 60 seconds... 
00:14:10.615 13:37:49 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 85726.82 342907.27 0.00 0.00 348160.00 0.00 0.00 ' 00:14:10.615 13:37:49 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:10.615 13:37:49 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:10.615 13:37:49 -- bdev/blockdev.sh@378 -- # iostat_result=85726.82 00:14:10.615 13:37:49 -- bdev/blockdev.sh@383 -- # echo 85726 00:14:10.615 13:37:49 -- bdev/blockdev.sh@414 -- # io_result=85726 00:14:10.615 13:37:49 -- bdev/blockdev.sh@416 -- # iops_limit=21000 00:14:10.615 13:37:49 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']' 00:14:10.615 13:37:49 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:14:10.615 13:37:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.615 13:37:49 -- common/autotest_common.sh@10 -- # set +x 00:14:10.615 13:37:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.615 13:37:49 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:14:10.615 13:37:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:10.615 13:37:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.615 13:37:49 -- common/autotest_common.sh@10 -- # set +x 00:14:10.615 ************************************ 00:14:10.615 START TEST bdev_qos_iops 00:14:10.615 ************************************ 00:14:10.616 13:37:49 -- common/autotest_common.sh@1104 -- # run_qos_test 21000 IOPS Malloc_0 00:14:10.616 13:37:49 -- bdev/blockdev.sh@387 -- # local qos_limit=21000 00:14:10.616 13:37:49 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:10.616 13:37:49 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:14:10.616 13:37:49 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:14:10.616 13:37:49 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:10.616 13:37:49 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:10.616 13:37:49 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:10.616 13:37:49 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:10.616 13:37:49 -- bdev/blockdev.sh@376 -- # tail -1 00:14:15.883 13:37:54 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 20991.32 83965.30 0.00 0.00 85596.00 0.00 0.00 ' 00:14:15.883 13:37:54 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:15.883 13:37:54 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:15.883 13:37:54 -- bdev/blockdev.sh@378 -- # iostat_result=20991.32 00:14:15.883 13:37:54 -- bdev/blockdev.sh@383 -- # echo 20991 00:14:15.883 ************************************ 00:14:15.883 END TEST bdev_qos_iops 00:14:15.883 ************************************ 00:14:15.883 13:37:54 -- bdev/blockdev.sh@390 -- # qos_result=20991 00:14:15.883 13:37:54 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:15.883 13:37:54 -- bdev/blockdev.sh@394 -- # lower_limit=18900 00:14:15.883 13:37:54 -- bdev/blockdev.sh@395 -- # upper_limit=23100 00:14:15.883 13:37:54 -- bdev/blockdev.sh@398 -- # '[' 20991 -lt 18900 ']' 00:14:15.883 13:37:54 -- bdev/blockdev.sh@398 -- # '[' 20991 -gt 23100 ']' 00:14:15.883 00:14:15.883 real 0m5.192s 00:14:15.883 user 0m0.107s 00:14:15.883 sys 0m0.028s 00:14:15.883 13:37:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.883 13:37:54 -- common/autotest_common.sh@10 -- # set +x 00:14:15.883 13:37:54 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:15.883 13:37:54 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:15.883 13:37:54 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:15.883 13:37:54 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:15.883 13:37:54 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:15.883 13:37:54 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.883 13:37:54 -- bdev/blockdev.sh@376 -- # tail -1 00:14:21.157 13:38:00 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 27538.46 110153.84 0.00 0.00 111616.00 0.00 0.00 ' 00:14:21.157 13:38:00 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:21.157 13:38:00 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.157 13:38:00 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:21.157 13:38:00 -- bdev/blockdev.sh@380 -- # iostat_result=111616.00 00:14:21.157 13:38:00 -- bdev/blockdev.sh@383 -- # echo 111616 00:14:21.157 13:38:00 -- bdev/blockdev.sh@425 -- # bw_limit=111616 00:14:21.157 13:38:00 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:14:21.157 13:38:00 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:14:21.157 13:38:00 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:14:21.157 13:38:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.157 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:14:21.157 13:38:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.157 13:38:00 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:14:21.157 13:38:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:21.157 13:38:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.157 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:14:21.157 ************************************ 00:14:21.157 START TEST bdev_qos_bw 00:14:21.157 ************************************ 00:14:21.157 13:38:00 -- common/autotest_common.sh@1104 -- # run_qos_test 10 BANDWIDTH Null_1 00:14:21.157 13:38:00 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:14:21.157 13:38:00 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:21.158 13:38:00 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:21.158 13:38:00 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:21.158 13:38:00 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:21.158 13:38:00 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:21.158 13:38:00 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:21.158 13:38:00 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:21.158 13:38:00 -- bdev/blockdev.sh@376 -- # tail -1 00:14:26.432 13:38:05 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2560.32 10241.28 0.00 0.00 10496.00 0.00 0.00 ' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@380 -- # iostat_result=10496.00 00:14:26.432 13:38:05 -- bdev/blockdev.sh@383 -- # echo 10496 00:14:26.432 ************************************ 00:14:26.432 END TEST bdev_qos_bw 00:14:26.432 ************************************ 00:14:26.432 13:38:05 -- bdev/blockdev.sh@390 -- # qos_result=10496 00:14:26.432 13:38:05 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:14:26.432 13:38:05 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:14:26.432 13:38:05 -- bdev/blockdev.sh@395 -- # 
upper_limit=11264 00:14:26.432 13:38:05 -- bdev/blockdev.sh@398 -- # '[' 10496 -lt 9216 ']' 00:14:26.432 13:38:05 -- bdev/blockdev.sh@398 -- # '[' 10496 -gt 11264 ']' 00:14:26.432 00:14:26.432 real 0m5.213s 00:14:26.432 user 0m0.093s 00:14:26.432 sys 0m0.036s 00:14:26.432 13:38:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.432 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.432 13:38:05 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:26.432 13:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.432 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.432 13:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.432 13:38:05 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:26.432 13:38:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:26.432 13:38:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.432 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.432 ************************************ 00:14:26.432 START TEST bdev_qos_ro_bw 00:14:26.432 ************************************ 00:14:26.432 13:38:05 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:26.432 13:38:05 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:26.432 13:38:05 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:26.432 13:38:05 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:26.432 13:38:05 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:26.432 13:38:05 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:26.432 13:38:05 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:26.432 13:38:05 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:26.432 13:38:05 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:26.432 13:38:05 -- bdev/blockdev.sh@376 -- # tail -1 00:14:31.709 13:38:10 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.88 2047.52 0.00 0.00 2068.00 0.00 0.00 ' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:14:31.709 13:38:10 -- bdev/blockdev.sh@383 -- # echo 2068 00:14:31.709 ************************************ 00:14:31.709 END TEST bdev_qos_ro_bw 00:14:31.709 ************************************ 00:14:31.709 13:38:10 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:14:31.709 13:38:10 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:31.709 13:38:10 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:31.709 13:38:10 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:31.709 13:38:10 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:14:31.709 13:38:10 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:14:31.709 00:14:31.709 real 0m5.158s 00:14:31.709 user 0m0.117s 00:14:31.709 sys 0m0.017s 00:14:31.709 13:38:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.709 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:31.709 13:38:10 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:31.709 13:38:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.709 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:14:31.969 
13:38:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.969 13:38:11 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:31.969 13:38:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.969 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:14:32.229 00:14:32.229 Latency(us) 00:14:32.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.229 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:32.229 Malloc_0 : 26.65 28957.76 113.12 0.00 0.00 8755.22 1681.33 505514.37 00:14:32.229 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:32.229 Null_1 : 26.87 28835.45 112.64 0.00 0.00 8856.42 525.86 223452.00 00:14:32.229 =================================================================================================================== 00:14:32.229 Total : 57793.20 225.75 0.00 0.00 8805.92 525.86 505514.37 00:14:32.229 0 00:14:32.229 13:38:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.229 13:38:11 -- bdev/blockdev.sh@459 -- # killprocess 113760 00:14:32.229 13:38:11 -- common/autotest_common.sh@926 -- # '[' -z 113760 ']' 00:14:32.229 13:38:11 -- common/autotest_common.sh@930 -- # kill -0 113760 00:14:32.229 13:38:11 -- common/autotest_common.sh@931 -- # uname 00:14:32.229 13:38:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.229 13:38:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113760 00:14:32.229 13:38:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.229 13:38:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.229 13:38:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113760' 00:14:32.229 killing process with pid 113760 00:14:32.229 13:38:11 -- common/autotest_common.sh@945 -- # kill 113760 00:14:32.229 Received shutdown signal, test time was about 26.912164 seconds 00:14:32.229 00:14:32.229 Latency(us) 00:14:32.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.229 =================================================================================================================== 00:14:32.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.229 13:38:11 -- common/autotest_common.sh@950 -- # wait 113760 00:14:33.610 ************************************ 00:14:33.610 END TEST bdev_qos 00:14:33.610 ************************************ 00:14:33.610 13:38:12 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:33.610 00:14:33.610 real 0m29.469s 00:14:33.610 user 0m30.106s 00:14:33.610 sys 0m0.571s 00:14:33.610 13:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.610 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:14:33.610 13:38:12 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:33.610 13:38:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.610 13:38:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.610 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:14:33.610 ************************************ 00:14:33.610 START TEST bdev_qd_sampling 00:14:33.610 ************************************ 00:14:33.610 13:38:12 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:33.610 13:38:12 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:33.610 13:38:12 -- bdev/blockdev.sh@539 -- # QD_PID=114287 00:14:33.610 13:38:12 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period 
testing pid: 114287' 00:14:33.610 Process bdev QD sampling period testing pid: 114287 00:14:33.610 13:38:12 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:33.610 13:38:12 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:33.610 13:38:12 -- bdev/blockdev.sh@542 -- # waitforlisten 114287 00:14:33.610 13:38:12 -- common/autotest_common.sh@819 -- # '[' -z 114287 ']' 00:14:33.610 13:38:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.610 13:38:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.610 13:38:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.610 13:38:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.610 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:14:33.868 [2024-07-10 13:38:12.984766] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:33.868 [2024-07-10 13:38:12.984968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114287 ] 00:14:33.868 [2024-07-10 13:38:13.148362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:34.126 [2024-07-10 13:38:13.340356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.126 [2024-07-10 13:38:13.340362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.694 13:38:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.694 13:38:13 -- common/autotest_common.sh@852 -- # return 0 00:14:34.694 13:38:13 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:34.694 13:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.694 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 Malloc_QD 00:14:34.694 13:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.694 13:38:13 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:34.694 13:38:13 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:34.694 13:38:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:34.694 13:38:13 -- common/autotest_common.sh@889 -- # local i 00:14:34.694 13:38:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:34.694 13:38:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:34.694 13:38:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:34.694 13:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.694 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 13:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.694 13:38:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:34.694 13:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.694 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 [ 00:14:34.694 { 00:14:34.694 "name": "Malloc_QD", 00:14:34.694 "aliases": [ 00:14:34.694 "79afa38e-a7d3-4431-981c-97d3eb82fa8c" 00:14:34.694 ], 00:14:34.694 "product_name": "Malloc disk", 00:14:34.694 "block_size": 512, 00:14:34.694 "num_blocks": 262144, 
00:14:34.694 "uuid": "79afa38e-a7d3-4431-981c-97d3eb82fa8c", 00:14:34.694 "assigned_rate_limits": { 00:14:34.694 "rw_ios_per_sec": 0, 00:14:34.694 "rw_mbytes_per_sec": 0, 00:14:34.694 "r_mbytes_per_sec": 0, 00:14:34.694 "w_mbytes_per_sec": 0 00:14:34.694 }, 00:14:34.694 "claimed": false, 00:14:34.694 "zoned": false, 00:14:34.694 "supported_io_types": { 00:14:34.694 "read": true, 00:14:34.694 "write": true, 00:14:34.694 "unmap": true, 00:14:34.694 "write_zeroes": true, 00:14:34.694 "flush": true, 00:14:34.694 "reset": true, 00:14:34.694 "compare": false, 00:14:34.694 "compare_and_write": false, 00:14:34.694 "abort": true, 00:14:34.694 "nvme_admin": false, 00:14:34.694 "nvme_io": false 00:14:34.694 }, 00:14:34.694 "memory_domains": [ 00:14:34.694 { 00:14:34.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.694 "dma_device_type": 2 00:14:34.694 } 00:14:34.694 ], 00:14:34.694 "driver_specific": {} 00:14:34.694 } 00:14:34.694 ] 00:14:34.694 13:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.694 13:38:13 -- common/autotest_common.sh@895 -- # return 0 00:14:34.694 13:38:13 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:34.694 13:38:13 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.953 Running I/O for 5 seconds... 00:14:36.863 13:38:15 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:36.863 13:38:15 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:36.863 13:38:15 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:36.863 13:38:15 -- bdev/blockdev.sh@519 -- # local iostats 00:14:36.863 13:38:15 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:36.863 13:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.863 13:38:15 -- common/autotest_common.sh@10 -- # set +x 00:14:36.863 13:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.863 13:38:16 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:36.863 13:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.863 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:14:36.863 13:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.863 13:38:16 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:36.863 "tick_rate": 2290000000, 00:14:36.863 "ticks": 1863902468208, 00:14:36.863 "bdevs": [ 00:14:36.863 { 00:14:36.863 "name": "Malloc_QD", 00:14:36.863 "bytes_read": 904958464, 00:14:36.863 "num_read_ops": 220931, 00:14:36.863 "bytes_written": 0, 00:14:36.863 "num_write_ops": 0, 00:14:36.863 "bytes_unmapped": 0, 00:14:36.863 "num_unmap_ops": 0, 00:14:36.863 "bytes_copied": 0, 00:14:36.863 "num_copy_ops": 0, 00:14:36.863 "read_latency_ticks": 2270801178450, 00:14:36.863 "max_read_latency_ticks": 18297550, 00:14:36.863 "min_read_latency_ticks": 335334, 00:14:36.863 "write_latency_ticks": 0, 00:14:36.863 "max_write_latency_ticks": 0, 00:14:36.863 "min_write_latency_ticks": 0, 00:14:36.863 "unmap_latency_ticks": 0, 00:14:36.863 "max_unmap_latency_ticks": 0, 00:14:36.863 "min_unmap_latency_ticks": 0, 00:14:36.863 "copy_latency_ticks": 0, 00:14:36.863 "max_copy_latency_ticks": 0, 00:14:36.863 "min_copy_latency_ticks": 0, 00:14:36.863 "io_error": {}, 00:14:36.863 "queue_depth_polling_period": 10, 00:14:36.863 "queue_depth": 512, 00:14:36.863 "io_time": 30, 00:14:36.863 "weighted_io_time": 15360 00:14:36.863 } 00:14:36.863 ] 00:14:36.863 }' 00:14:36.863 13:38:16 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:14:36.863 13:38:16 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:36.863 13:38:16 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:36.863 13:38:16 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:36.863 13:38:16 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:36.863 13:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.863 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:14:36.863 00:14:36.863 Latency(us) 00:14:36.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.863 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:36.863 Malloc_QD : 2.02 57107.59 223.08 0.00 0.00 4472.48 1094.65 8013.14 00:14:36.863 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:36.863 Malloc_QD : 2.02 56796.06 221.86 0.00 0.00 4497.51 697.57 4979.59 00:14:36.863 =================================================================================================================== 00:14:36.863 Total : 113903.65 444.94 0.00 0.00 4484.96 697.57 8013.14 00:14:37.123 0 00:14:37.123 13:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.123 13:38:16 -- bdev/blockdev.sh@552 -- # killprocess 114287 00:14:37.123 13:38:16 -- common/autotest_common.sh@926 -- # '[' -z 114287 ']' 00:14:37.123 13:38:16 -- common/autotest_common.sh@930 -- # kill -0 114287 00:14:37.123 13:38:16 -- common/autotest_common.sh@931 -- # uname 00:14:37.123 13:38:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:37.123 13:38:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114287 00:14:37.123 13:38:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:37.123 13:38:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:37.123 13:38:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114287' 00:14:37.123 killing process with pid 114287 00:14:37.123 13:38:16 -- common/autotest_common.sh@945 -- # kill 114287 00:14:37.123 Received shutdown signal, test time was about 2.178022 seconds 00:14:37.123 00:14:37.123 Latency(us) 00:14:37.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.123 =================================================================================================================== 00:14:37.123 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.123 13:38:16 -- common/autotest_common.sh@950 -- # wait 114287 00:14:38.504 ************************************ 00:14:38.504 END TEST bdev_qd_sampling 00:14:38.504 ************************************ 00:14:38.504 13:38:17 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:38.504 00:14:38.504 real 0m4.747s 00:14:38.504 user 0m8.789s 00:14:38.504 sys 0m0.344s 00:14:38.504 13:38:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.504 13:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:38.504 13:38:17 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:38.504 13:38:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:38.504 13:38:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.504 13:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:38.504 ************************************ 00:14:38.504 START TEST bdev_error 00:14:38.504 ************************************ 00:14:38.504 13:38:17 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:14:38.504 13:38:17 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:38.504 
13:38:17 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:38.504 13:38:17 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:38.504 13:38:17 -- bdev/blockdev.sh@470 -- # ERR_PID=114387 00:14:38.504 13:38:17 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 114387' 00:14:38.504 13:38:17 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:38.504 Process error testing pid: 114387 00:14:38.504 13:38:17 -- bdev/blockdev.sh@472 -- # waitforlisten 114387 00:14:38.504 13:38:17 -- common/autotest_common.sh@819 -- # '[' -z 114387 ']' 00:14:38.504 13:38:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.504 13:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.504 13:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.504 13:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.504 13:38:17 -- common/autotest_common.sh@10 -- # set +x 00:14:38.504 [2024-07-10 13:38:17.793985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:38.504 [2024-07-10 13:38:17.794191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114387 ] 00:14:38.764 [2024-07-10 13:38:17.949046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.024 [2024-07-10 13:38:18.139074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.283 13:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.283 13:38:18 -- common/autotest_common.sh@852 -- # return 0 00:14:39.283 13:38:18 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:39.283 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.283 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 Dev_1 00:14:39.543 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.543 13:38:18 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:39.543 13:38:18 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:39.543 13:38:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.543 13:38:18 -- common/autotest_common.sh@889 -- # local i 00:14:39.543 13:38:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.543 13:38:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.543 13:38:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.543 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.543 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.543 13:38:18 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:39.543 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.543 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 [ 00:14:39.543 { 00:14:39.543 "name": "Dev_1", 00:14:39.543 "aliases": [ 00:14:39.543 "31226bc0-5a93-407b-bbd2-b094c5a37636" 00:14:39.543 ], 00:14:39.543 "product_name": "Malloc disk", 00:14:39.543 "block_size": 512, 00:14:39.543 "num_blocks": 262144, 
00:14:39.543 "uuid": "31226bc0-5a93-407b-bbd2-b094c5a37636", 00:14:39.543 "assigned_rate_limits": { 00:14:39.543 "rw_ios_per_sec": 0, 00:14:39.543 "rw_mbytes_per_sec": 0, 00:14:39.543 "r_mbytes_per_sec": 0, 00:14:39.543 "w_mbytes_per_sec": 0 00:14:39.543 }, 00:14:39.543 "claimed": false, 00:14:39.543 "zoned": false, 00:14:39.543 "supported_io_types": { 00:14:39.543 "read": true, 00:14:39.543 "write": true, 00:14:39.543 "unmap": true, 00:14:39.543 "write_zeroes": true, 00:14:39.543 "flush": true, 00:14:39.543 "reset": true, 00:14:39.543 "compare": false, 00:14:39.543 "compare_and_write": false, 00:14:39.543 "abort": true, 00:14:39.543 "nvme_admin": false, 00:14:39.543 "nvme_io": false 00:14:39.543 }, 00:14:39.543 "memory_domains": [ 00:14:39.543 { 00:14:39.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.543 "dma_device_type": 2 00:14:39.543 } 00:14:39.543 ], 00:14:39.543 "driver_specific": {} 00:14:39.543 } 00:14:39.543 ] 00:14:39.543 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.543 13:38:18 -- common/autotest_common.sh@895 -- # return 0 00:14:39.543 13:38:18 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:39.543 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.543 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 true 00:14:39.543 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.543 13:38:18 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:39.543 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.543 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.543 Dev_2 00:14:39.543 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.543 13:38:18 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:39.543 13:38:18 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:39.543 13:38:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.543 13:38:18 -- common/autotest_common.sh@889 -- # local i 00:14:39.543 13:38:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.543 13:38:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.543 13:38:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.543 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.543 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.803 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.803 13:38:18 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:39.803 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.803 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.803 [ 00:14:39.803 { 00:14:39.803 "name": "Dev_2", 00:14:39.803 "aliases": [ 00:14:39.803 "90db16bb-2142-47af-903f-90c757a36ac1" 00:14:39.803 ], 00:14:39.803 "product_name": "Malloc disk", 00:14:39.803 "block_size": 512, 00:14:39.803 "num_blocks": 262144, 00:14:39.803 "uuid": "90db16bb-2142-47af-903f-90c757a36ac1", 00:14:39.803 "assigned_rate_limits": { 00:14:39.803 "rw_ios_per_sec": 0, 00:14:39.803 "rw_mbytes_per_sec": 0, 00:14:39.803 "r_mbytes_per_sec": 0, 00:14:39.803 "w_mbytes_per_sec": 0 00:14:39.803 }, 00:14:39.803 "claimed": false, 00:14:39.803 "zoned": false, 00:14:39.803 "supported_io_types": { 00:14:39.803 "read": true, 00:14:39.803 "write": true, 00:14:39.803 "unmap": true, 00:14:39.803 "write_zeroes": true, 00:14:39.803 "flush": true, 00:14:39.803 "reset": true, 00:14:39.804 "compare": false, 
00:14:39.804 "compare_and_write": false, 00:14:39.804 "abort": true, 00:14:39.804 "nvme_admin": false, 00:14:39.804 "nvme_io": false 00:14:39.804 }, 00:14:39.804 "memory_domains": [ 00:14:39.804 { 00:14:39.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.804 "dma_device_type": 2 00:14:39.804 } 00:14:39.804 ], 00:14:39.804 "driver_specific": {} 00:14:39.804 } 00:14:39.804 ] 00:14:39.804 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.804 13:38:18 -- common/autotest_common.sh@895 -- # return 0 00:14:39.804 13:38:18 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:39.804 13:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.804 13:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:39.804 13:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.804 13:38:18 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:39.804 13:38:18 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:39.804 Running I/O for 5 seconds... 00:14:40.743 13:38:19 -- bdev/blockdev.sh@485 -- # kill -0 114387 00:14:40.743 13:38:19 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 114387' 00:14:40.743 Process is existed as continue on error is set. Pid: 114387 00:14:40.743 13:38:19 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:40.743 13:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.743 13:38:19 -- common/autotest_common.sh@10 -- # set +x 00:14:40.743 13:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.743 13:38:19 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:40.743 13:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.743 13:38:19 -- common/autotest_common.sh@10 -- # set +x 00:14:40.743 Timeout while waiting for response: 00:14:40.743 00:14:40.743 00:14:41.002 13:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.002 13:38:20 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:45.194 00:14:45.194 Latency(us) 00:14:45.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.195 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:45.195 EE_Dev_1 : 0.93 47840.24 186.88 5.37 0.00 332.02 101.06 529.44 00:14:45.195 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:45.195 Dev_2 : 5.00 98558.70 384.99 0.00 0.00 160.10 47.40 368146.33 00:14:45.195 =================================================================================================================== 00:14:45.195 Total : 146398.94 571.87 5.37 0.00 174.34 47.40 368146.33 00:14:46.133 13:38:25 -- bdev/blockdev.sh@497 -- # killprocess 114387 00:14:46.133 13:38:25 -- common/autotest_common.sh@926 -- # '[' -z 114387 ']' 00:14:46.133 13:38:25 -- common/autotest_common.sh@930 -- # kill -0 114387 00:14:46.133 13:38:25 -- common/autotest_common.sh@931 -- # uname 00:14:46.134 13:38:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:46.134 13:38:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114387 00:14:46.134 13:38:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:46.134 13:38:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:46.134 killing process with pid 114387 00:14:46.134 13:38:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114387' 00:14:46.134 13:38:25 -- common/autotest_common.sh@945 -- # 
kill 114387 00:14:46.134 Received shutdown signal, test time was about 5.000000 seconds 00:14:46.134 00:14:46.134 Latency(us) 00:14:46.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.134 =================================================================================================================== 00:14:46.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.134 13:38:25 -- common/autotest_common.sh@950 -- # wait 114387 00:14:48.037 Process error testing pid: 114524 00:14:48.037 13:38:26 -- bdev/blockdev.sh@501 -- # ERR_PID=114524 00:14:48.037 13:38:26 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:48.037 13:38:26 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 114524' 00:14:48.037 13:38:26 -- bdev/blockdev.sh@503 -- # waitforlisten 114524 00:14:48.037 13:38:26 -- common/autotest_common.sh@819 -- # '[' -z 114524 ']' 00:14:48.037 13:38:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.037 13:38:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.037 13:38:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.037 13:38:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.037 13:38:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.037 [2024-07-10 13:38:27.021828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:48.037 [2024-07-10 13:38:27.022003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114524 ] 00:14:48.037 [2024-07-10 13:38:27.167585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.037 [2024-07-10 13:38:27.358826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.606 13:38:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.606 13:38:27 -- common/autotest_common.sh@852 -- # return 0 00:14:48.606 13:38:27 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:48.606 13:38:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.606 13:38:27 -- common/autotest_common.sh@10 -- # set +x 00:14:48.865 Dev_1 00:14:48.865 13:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.865 13:38:27 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:48.865 13:38:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:48.865 13:38:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.865 13:38:27 -- common/autotest_common.sh@889 -- # local i 00:14:48.865 13:38:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.865 13:38:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.865 13:38:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:48.865 13:38:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.865 13:38:27 -- common/autotest_common.sh@10 -- # set +x 00:14:48.865 13:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.865 13:38:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:48.865 13:38:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:14:48.865 13:38:27 -- common/autotest_common.sh@10 -- # set +x 00:14:48.865 [ 00:14:48.865 { 00:14:48.865 "name": "Dev_1", 00:14:48.865 "aliases": [ 00:14:48.865 "8c383d44-b0cc-495e-b7ed-2d254297fb42" 00:14:48.865 ], 00:14:48.865 "product_name": "Malloc disk", 00:14:48.865 "block_size": 512, 00:14:48.865 "num_blocks": 262144, 00:14:48.865 "uuid": "8c383d44-b0cc-495e-b7ed-2d254297fb42", 00:14:48.865 "assigned_rate_limits": { 00:14:48.865 "rw_ios_per_sec": 0, 00:14:48.865 "rw_mbytes_per_sec": 0, 00:14:48.865 "r_mbytes_per_sec": 0, 00:14:48.865 "w_mbytes_per_sec": 0 00:14:48.865 }, 00:14:48.865 "claimed": false, 00:14:48.865 "zoned": false, 00:14:48.865 "supported_io_types": { 00:14:48.865 "read": true, 00:14:48.865 "write": true, 00:14:48.865 "unmap": true, 00:14:48.865 "write_zeroes": true, 00:14:48.865 "flush": true, 00:14:48.865 "reset": true, 00:14:48.865 "compare": false, 00:14:48.865 "compare_and_write": false, 00:14:48.865 "abort": true, 00:14:48.865 "nvme_admin": false, 00:14:48.865 "nvme_io": false 00:14:48.866 }, 00:14:48.866 "memory_domains": [ 00:14:48.866 { 00:14:48.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.866 "dma_device_type": 2 00:14:48.866 } 00:14:48.866 ], 00:14:48.866 "driver_specific": {} 00:14:48.866 } 00:14:48.866 ] 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- common/autotest_common.sh@895 -- # return 0 00:14:48.866 13:38:28 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:48.866 13:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.866 13:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.866 true 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:48.866 13:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.866 13:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.866 Dev_2 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:48.866 13:38:28 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:48.866 13:38:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.866 13:38:28 -- common/autotest_common.sh@889 -- # local i 00:14:48.866 13:38:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.866 13:38:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.866 13:38:28 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:48.866 13:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.866 13:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:48.866 13:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.866 13:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.866 [ 00:14:48.866 { 00:14:48.866 "name": "Dev_2", 00:14:48.866 "aliases": [ 00:14:48.866 "5fe0b905-18fb-4094-a40c-76f8f31170c3" 00:14:48.866 ], 00:14:48.866 "product_name": "Malloc disk", 00:14:48.866 "block_size": 512, 00:14:48.866 "num_blocks": 262144, 00:14:48.866 "uuid": "5fe0b905-18fb-4094-a40c-76f8f31170c3", 00:14:48.866 "assigned_rate_limits": { 00:14:48.866 "rw_ios_per_sec": 0, 00:14:48.866 "rw_mbytes_per_sec": 0, 00:14:48.866 "r_mbytes_per_sec": 0, 00:14:48.866 
"w_mbytes_per_sec": 0 00:14:48.866 }, 00:14:48.866 "claimed": false, 00:14:48.866 "zoned": false, 00:14:48.866 "supported_io_types": { 00:14:48.866 "read": true, 00:14:48.866 "write": true, 00:14:48.866 "unmap": true, 00:14:48.866 "write_zeroes": true, 00:14:48.866 "flush": true, 00:14:48.866 "reset": true, 00:14:48.866 "compare": false, 00:14:48.866 "compare_and_write": false, 00:14:48.866 "abort": true, 00:14:48.866 "nvme_admin": false, 00:14:48.866 "nvme_io": false 00:14:48.866 }, 00:14:48.866 "memory_domains": [ 00:14:48.866 { 00:14:48.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.866 "dma_device_type": 2 00:14:48.866 } 00:14:48.866 ], 00:14:48.866 "driver_specific": {} 00:14:48.866 } 00:14:48.866 ] 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- common/autotest_common.sh@895 -- # return 0 00:14:48.866 13:38:28 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:48.866 13:38:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.866 13:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.866 13:38:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.866 13:38:28 -- bdev/blockdev.sh@513 -- # NOT wait 114524 00:14:48.866 13:38:28 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:48.866 13:38:28 -- common/autotest_common.sh@640 -- # local es=0 00:14:48.866 13:38:28 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 114524 00:14:48.866 13:38:28 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:48.866 13:38:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.866 13:38:28 -- common/autotest_common.sh@632 -- # type -t wait 00:14:48.866 13:38:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.866 13:38:28 -- common/autotest_common.sh@643 -- # wait 114524 00:14:49.126 Running I/O for 5 seconds... 
00:14:49.126 task offset: 18848 on job bdev=EE_Dev_1 fails 00:14:49.126 00:14:49.126 Latency(us) 00:14:49.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.126 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:49.126 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:49.126 EE_Dev_1 : 0.00 34700.32 135.55 7886.44 0.00 310.61 116.26 558.06 00:14:49.126 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:49.126 Dev_2 : 0.00 23460.41 91.64 0.00 0.00 483.37 108.66 890.75 00:14:49.126 =================================================================================================================== 00:14:49.126 Total : 58160.73 227.19 7886.44 0.00 404.31 108.66 890.75 00:14:49.126 [2024-07-10 13:38:28.277774] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:49.126 request: 00:14:49.126 { 00:14:49.126 "method": "perform_tests", 00:14:49.126 "req_id": 1 00:14:49.126 } 00:14:49.126 Got JSON-RPC error response 00:14:49.126 response: 00:14:49.126 { 00:14:49.126 "code": -32603, 00:14:49.126 "message": "bdevperf failed with error Operation not permitted" 00:14:49.126 } 00:14:51.098 ************************************ 00:14:51.098 END TEST bdev_error 00:14:51.098 ************************************ 00:14:51.098 13:38:30 -- common/autotest_common.sh@643 -- # es=255 00:14:51.098 13:38:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:51.098 13:38:30 -- common/autotest_common.sh@652 -- # es=127 00:14:51.098 13:38:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:51.098 13:38:30 -- common/autotest_common.sh@660 -- # es=1 00:14:51.098 13:38:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:51.098 00:14:51.098 real 0m12.454s 00:14:51.098 user 0m12.486s 00:14:51.098 sys 0m0.714s 00:14:51.098 13:38:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.098 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:14:51.098 13:38:30 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:51.098 13:38:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:51.098 13:38:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.098 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:14:51.098 ************************************ 00:14:51.098 START TEST bdev_stat 00:14:51.098 ************************************ 00:14:51.098 13:38:30 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:51.098 13:38:30 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:51.098 13:38:30 -- bdev/blockdev.sh@594 -- # STAT_PID=114587 00:14:51.098 13:38:30 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 114587' 00:14:51.098 Process Bdev IO statistics testing pid: 114587 00:14:51.098 13:38:30 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:51.098 13:38:30 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:51.098 13:38:30 -- bdev/blockdev.sh@597 -- # waitforlisten 114587 00:14:51.098 13:38:30 -- common/autotest_common.sh@819 -- # '[' -z 114587 ']' 00:14:51.098 13:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.098 13:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:51.098 13:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:51.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.098 13:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:51.098 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:14:51.098 [2024-07-10 13:38:30.320153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:51.098 [2024-07-10 13:38:30.320293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114587 ] 00:14:51.358 [2024-07-10 13:38:30.477782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:51.358 [2024-07-10 13:38:30.690936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.358 [2024-07-10 13:38:30.690946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.927 13:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:51.927 13:38:31 -- common/autotest_common.sh@852 -- # return 0 00:14:51.927 13:38:31 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:51.927 13:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.927 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.187 Malloc_STAT 00:14:52.187 13:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.187 13:38:31 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:52.187 13:38:31 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:52.187 13:38:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.187 13:38:31 -- common/autotest_common.sh@889 -- # local i 00:14:52.187 13:38:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.187 13:38:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.187 13:38:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:52.187 13:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.187 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.187 13:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.187 13:38:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:52.187 13:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.187 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.187 [ 00:14:52.187 { 00:14:52.187 "name": "Malloc_STAT", 00:14:52.187 "aliases": [ 00:14:52.187 "f83da401-42b0-478d-99aa-59856a723360" 00:14:52.187 ], 00:14:52.187 "product_name": "Malloc disk", 00:14:52.187 "block_size": 512, 00:14:52.187 "num_blocks": 262144, 00:14:52.187 "uuid": "f83da401-42b0-478d-99aa-59856a723360", 00:14:52.187 "assigned_rate_limits": { 00:14:52.187 "rw_ios_per_sec": 0, 00:14:52.187 "rw_mbytes_per_sec": 0, 00:14:52.187 "r_mbytes_per_sec": 0, 00:14:52.187 "w_mbytes_per_sec": 0 00:14:52.187 }, 00:14:52.187 "claimed": false, 00:14:52.187 "zoned": false, 00:14:52.187 "supported_io_types": { 00:14:52.187 "read": true, 00:14:52.187 "write": true, 00:14:52.187 "unmap": true, 00:14:52.187 "write_zeroes": true, 00:14:52.187 "flush": true, 00:14:52.187 "reset": true, 00:14:52.187 "compare": false, 00:14:52.187 "compare_and_write": false, 00:14:52.187 "abort": true, 00:14:52.187 "nvme_admin": false, 00:14:52.187 "nvme_io": false 00:14:52.187 }, 00:14:52.187 "memory_domains": [ 00:14:52.187 { 00:14:52.187 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:52.187 "dma_device_type": 2 00:14:52.187 } 00:14:52.187 ], 00:14:52.187 "driver_specific": {} 00:14:52.187 } 00:14:52.187 ] 00:14:52.187 13:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.187 13:38:31 -- common/autotest_common.sh@895 -- # return 0 00:14:52.187 13:38:31 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:52.187 13:38:31 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.187 Running I/O for 10 seconds... 00:14:54.144 13:38:33 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:54.144 13:38:33 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:54.144 13:38:33 -- bdev/blockdev.sh@558 -- # local iostats 00:14:54.144 13:38:33 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:54.144 13:38:33 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:54.144 13:38:33 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:54.144 13:38:33 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:54.144 13:38:33 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:54.144 13:38:33 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:54.144 13:38:33 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:54.144 13:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.144 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:14:54.144 13:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.144 13:38:33 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:54.144 "tick_rate": 2290000000, 00:14:54.144 "ticks": 1903728684900, 00:14:54.144 "bdevs": [ 00:14:54.144 { 00:14:54.144 "name": "Malloc_STAT", 00:14:54.144 "bytes_read": 923832832, 00:14:54.144 "num_read_ops": 225539, 00:14:54.144 "bytes_written": 0, 00:14:54.144 "num_write_ops": 0, 00:14:54.144 "bytes_unmapped": 0, 00:14:54.144 "num_unmap_ops": 0, 00:14:54.144 "bytes_copied": 0, 00:14:54.144 "num_copy_ops": 0, 00:14:54.144 "read_latency_ticks": 2239191226050, 00:14:54.144 "max_read_latency_ticks": 12560526, 00:14:54.144 "min_read_latency_ticks": 449936, 00:14:54.144 "write_latency_ticks": 0, 00:14:54.144 "max_write_latency_ticks": 0, 00:14:54.144 "min_write_latency_ticks": 0, 00:14:54.144 "unmap_latency_ticks": 0, 00:14:54.144 "max_unmap_latency_ticks": 0, 00:14:54.144 "min_unmap_latency_ticks": 0, 00:14:54.144 "copy_latency_ticks": 0, 00:14:54.144 "max_copy_latency_ticks": 0, 00:14:54.144 "min_copy_latency_ticks": 0, 00:14:54.144 "io_error": {} 00:14:54.144 } 00:14:54.144 ] 00:14:54.144 }' 00:14:54.144 13:38:33 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:54.144 13:38:33 -- bdev/blockdev.sh@567 -- # io_count1=225539 00:14:54.144 13:38:33 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:54.144 13:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.144 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:14:54.144 13:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.144 13:38:33 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:54.144 "tick_rate": 2290000000, 00:14:54.144 "ticks": 1903887711244, 00:14:54.144 "name": "Malloc_STAT", 00:14:54.144 "channels": [ 00:14:54.144 { 00:14:54.144 "thread_id": 2, 00:14:54.144 "bytes_read": 485490688, 00:14:54.144 "num_read_ops": 118528, 00:14:54.144 "bytes_written": 0, 00:14:54.144 "num_write_ops": 0, 00:14:54.144 "bytes_unmapped": 0, 00:14:54.144 "num_unmap_ops": 0, 00:14:54.144 "bytes_copied": 0, 00:14:54.144 
"num_copy_ops": 0, 00:14:54.144 "read_latency_ticks": 1159314746304, 00:14:54.144 "max_read_latency_ticks": 11992106, 00:14:54.144 "min_read_latency_ticks": 7932306, 00:14:54.144 "write_latency_ticks": 0, 00:14:54.144 "max_write_latency_ticks": 0, 00:14:54.144 "min_write_latency_ticks": 0, 00:14:54.144 "unmap_latency_ticks": 0, 00:14:54.144 "max_unmap_latency_ticks": 0, 00:14:54.144 "min_unmap_latency_ticks": 0, 00:14:54.144 "copy_latency_ticks": 0, 00:14:54.144 "max_copy_latency_ticks": 0, 00:14:54.144 "min_copy_latency_ticks": 0 00:14:54.144 }, 00:14:54.144 { 00:14:54.144 "thread_id": 3, 00:14:54.144 "bytes_read": 471859200, 00:14:54.144 "num_read_ops": 115200, 00:14:54.144 "bytes_written": 0, 00:14:54.144 "num_write_ops": 0, 00:14:54.144 "bytes_unmapped": 0, 00:14:54.144 "num_unmap_ops": 0, 00:14:54.144 "bytes_copied": 0, 00:14:54.144 "num_copy_ops": 0, 00:14:54.144 "read_latency_ticks": 1160478110310, 00:14:54.144 "max_read_latency_ticks": 12560526, 00:14:54.144 "min_read_latency_ticks": 7124152, 00:14:54.144 "write_latency_ticks": 0, 00:14:54.144 "max_write_latency_ticks": 0, 00:14:54.144 "min_write_latency_ticks": 0, 00:14:54.144 "unmap_latency_ticks": 0, 00:14:54.144 "max_unmap_latency_ticks": 0, 00:14:54.144 "min_unmap_latency_ticks": 0, 00:14:54.144 "copy_latency_ticks": 0, 00:14:54.144 "max_copy_latency_ticks": 0, 00:14:54.144 "min_copy_latency_ticks": 0 00:14:54.144 } 00:14:54.144 ] 00:14:54.144 }' 00:14:54.144 13:38:33 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:54.418 13:38:33 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=118528 00:14:54.418 13:38:33 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=118528 00:14:54.418 13:38:33 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:54.418 13:38:33 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=115200 00:14:54.418 13:38:33 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=233728 00:14:54.418 13:38:33 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:54.418 13:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.418 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:14:54.418 13:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.418 13:38:33 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:54.418 "tick_rate": 2290000000, 00:14:54.418 "ticks": 1904177370510, 00:14:54.418 "bdevs": [ 00:14:54.418 { 00:14:54.418 "name": "Malloc_STAT", 00:14:54.418 "bytes_read": 1018204672, 00:14:54.418 "num_read_ops": 248579, 00:14:54.418 "bytes_written": 0, 00:14:54.418 "num_write_ops": 0, 00:14:54.418 "bytes_unmapped": 0, 00:14:54.418 "num_unmap_ops": 0, 00:14:54.418 "bytes_copied": 0, 00:14:54.418 "num_copy_ops": 0, 00:14:54.418 "read_latency_ticks": 2468297389488, 00:14:54.418 "max_read_latency_ticks": 12628780, 00:14:54.418 "min_read_latency_ticks": 449936, 00:14:54.418 "write_latency_ticks": 0, 00:14:54.418 "max_write_latency_ticks": 0, 00:14:54.418 "min_write_latency_ticks": 0, 00:14:54.418 "unmap_latency_ticks": 0, 00:14:54.418 "max_unmap_latency_ticks": 0, 00:14:54.418 "min_unmap_latency_ticks": 0, 00:14:54.418 "copy_latency_ticks": 0, 00:14:54.418 "max_copy_latency_ticks": 0, 00:14:54.418 "min_copy_latency_ticks": 0, 00:14:54.418 "io_error": {} 00:14:54.418 } 00:14:54.418 ] 00:14:54.418 }' 00:14:54.418 13:38:33 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:54.418 13:38:33 -- bdev/blockdev.sh@576 -- # io_count2=248579 00:14:54.418 13:38:33 -- bdev/blockdev.sh@581 -- # '[' 233728 -lt 225539 ']' 00:14:54.418 
13:38:33 -- bdev/blockdev.sh@581 -- # '[' 233728 -gt 248579 ']' 00:14:54.418 13:38:33 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:54.418 13:38:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.418 13:38:33 -- common/autotest_common.sh@10 -- # set +x 00:14:54.418 00:14:54.418 Latency(us) 00:14:54.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.418 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:54.418 Malloc_STAT : 2.18 59665.00 233.07 0.00 0.00 4281.19 1015.95 5637.81 00:14:54.418 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:54.418 Malloc_STAT : 2.19 58207.64 227.37 0.00 0.00 4388.37 711.88 5494.72 00:14:54.418 =================================================================================================================== 00:14:54.418 Total : 117872.64 460.44 0.00 0.00 4334.14 711.88 5637.81 00:14:54.418 0 00:14:54.418 13:38:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.418 13:38:33 -- bdev/blockdev.sh@607 -- # killprocess 114587 00:14:54.418 13:38:33 -- common/autotest_common.sh@926 -- # '[' -z 114587 ']' 00:14:54.418 13:38:33 -- common/autotest_common.sh@930 -- # kill -0 114587 00:14:54.418 13:38:33 -- common/autotest_common.sh@931 -- # uname 00:14:54.418 13:38:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.418 13:38:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114587 00:14:54.678 killing process with pid 114587 00:14:54.678 Received shutdown signal, test time was about 2.343075 seconds 00:14:54.678 00:14:54.678 Latency(us) 00:14:54.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.678 =================================================================================================================== 00:14:54.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.678 13:38:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.678 13:38:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.678 13:38:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114587' 00:14:54.678 13:38:33 -- common/autotest_common.sh@945 -- # kill 114587 00:14:54.678 13:38:33 -- common/autotest_common.sh@950 -- # wait 114587 00:14:56.059 ************************************ 00:14:56.059 END TEST bdev_stat 00:14:56.059 ************************************ 00:14:56.059 13:38:35 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:56.059 00:14:56.059 real 0m4.971s 00:14:56.059 user 0m9.409s 00:14:56.059 sys 0m0.364s 00:14:56.059 13:38:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.059 13:38:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.059 13:38:35 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:56.059 13:38:35 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:56.059 13:38:35 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:56.059 13:38:35 -- bdev/blockdev.sh@809 -- # cleanup 00:14:56.059 13:38:35 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:56.059 13:38:35 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:56.059 13:38:35 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:56.059 13:38:35 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:56.059 13:38:35 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:56.059 13:38:35 -- bdev/blockdev.sh@38 -- # [[ bdev 
== xnvme ]] 00:14:56.059 00:14:56.059 real 2m33.890s 00:14:56.059 user 6m10.655s 00:14:56.059 sys 0m21.034s 00:14:56.059 13:38:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.059 13:38:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.059 ************************************ 00:14:56.059 END TEST blockdev_general 00:14:56.059 ************************************ 00:14:56.059 13:38:35 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:56.059 13:38:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:56.059 13:38:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.059 13:38:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.059 ************************************ 00:14:56.059 START TEST bdev_raid 00:14:56.059 ************************************ 00:14:56.059 13:38:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:56.319 * Looking for test storage... 00:14:56.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:56.319 13:38:35 -- bdev/nbd_common.sh@6 -- # set -e 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:56.319 13:38:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:56.319 13:38:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.319 13:38:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.319 ************************************ 00:14:56.319 START TEST raid_function_test_raid0 00:14:56.319 ************************************ 00:14:56.319 13:38:35 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:56.319 13:38:35 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@86 -- # raid_pid=114759 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114759' 00:14:56.320 Process raid pid: 114759 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:56.320 13:38:35 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114759 /var/tmp/spdk-raid.sock 00:14:56.320 13:38:35 -- common/autotest_common.sh@819 -- # '[' -z 114759 ']' 00:14:56.320 13:38:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.320 13:38:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:56.320 13:38:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:56.320 13:38:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.320 13:38:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.320 [2024-07-10 13:38:35.569116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:56.320 [2024-07-10 13:38:35.569245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.579 [2024-07-10 13:38:35.721189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.579 [2024-07-10 13:38:35.919964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.839 [2024-07-10 13:38:36.114713] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.099 13:38:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.099 13:38:36 -- common/autotest_common.sh@852 -- # return 0 00:14:57.099 13:38:36 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:57.099 13:38:36 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:57.099 13:38:36 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:57.099 13:38:36 -- bdev/bdev_raid.sh@70 -- # cat 00:14:57.099 13:38:36 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:57.358 [2024-07-10 13:38:36.668749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:57.358 [2024-07-10 13:38:36.670535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:57.358 [2024-07-10 13:38:36.670624] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:57.358 [2024-07-10 13:38:36.670634] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:57.358 [2024-07-10 13:38:36.670789] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:57.358 [2024-07-10 13:38:36.671090] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:57.358 [2024-07-10 13:38:36.671107] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:57.358 [2024-07-10 13:38:36.671268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.358 Base_1 00:14:57.358 Base_2 00:14:57.358 13:38:36 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:57.358 13:38:36 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:57.358 13:38:36 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.618 13:38:36 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:57.618 13:38:36 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:57.618 13:38:36 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@12 -- # local i 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
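Condensing what the trace around this point performs: two base bdevs are claimed into a raid0 of 131072 blocks x 512 bytes, the raid's name is read back from bdev_raid_get_bdevs online, and the bdev is exported as a Linux block device over NBD. A sketch of that flow (the 32 MiB malloc bases are an inference from the 131072-block result, since their creation is hidden inside the rpcs.txt batch; the rest is as traced):

rpc='./scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_malloc_create -b Base_1 32 512
$rpc bdev_malloc_create -b Base_2 32 512
$rpc bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid
raid_bdev=$($rpc bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)')
$rpc nbd_start_disk "$raid_bdev" /dev/nbd0   # now addressable as a regular block device
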
00:14:57.618 13:38:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.618 13:38:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:57.878 [2024-07-10 13:38:37.056064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:57.879 /dev/nbd0 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.879 13:38:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:57.879 13:38:37 -- common/autotest_common.sh@857 -- # local i 00:14:57.879 13:38:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:57.879 13:38:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:57.879 13:38:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:57.879 13:38:37 -- common/autotest_common.sh@861 -- # break 00:14:57.879 13:38:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:57.879 13:38:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:57.879 13:38:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.879 1+0 records in 00:14:57.879 1+0 records out 00:14:57.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521469 s, 7.9 MB/s 00:14:57.879 13:38:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.879 13:38:37 -- common/autotest_common.sh@874 -- # size=4096 00:14:57.879 13:38:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.879 13:38:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:57.879 13:38:37 -- common/autotest_common.sh@877 -- # return 0 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.879 13:38:37 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:57.879 13:38:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:58.139 { 00:14:58.139 "nbd_device": "/dev/nbd0", 00:14:58.139 "bdev_name": "raid" 00:14:58.139 } 00:14:58.139 ]' 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:58.139 { 00:14:58.139 "nbd_device": "/dev/nbd0", 00:14:58.139 "bdev_name": "raid" 00:14:58.139 } 00:14:58.139 ]' 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@65 -- # count=1 00:14:58.139 13:38:37 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:58.139 13:38:37 -- 
bdev/bdev_raid.sh@20 -- # local blksize 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:58.139 4096+0 records in 00:14:58.139 4096+0 records out 00:14:58.139 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0285045 s, 73.6 MB/s 00:14:58.139 13:38:37 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:58.399 4096+0 records in 00:14:58.399 4096+0 records out 00:14:58.399 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.171087 s, 12.3 MB/s 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:58.399 128+0 records in 00:14:58.399 128+0 records out 00:14:58.399 65536 bytes (66 kB, 64 KiB) copied, 0.000657959 s, 99.6 MB/s 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:58.399 2035+0 records in 00:14:58.399 2035+0 records out 00:14:58.399 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00808022 s, 129 MB/s 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:58.399 456+0 records in 
00:14:58.399 456+0 records out 00:14:58.399 233472 bytes (233 kB, 228 KiB) copied, 0.0017238 s, 135 MB/s 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:58.399 13:38:37 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@51 -- # local i 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.399 13:38:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.658 [2024-07-10 13:38:37.867960] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@41 -- # break 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.658 13:38:37 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:58.658 13:38:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:58.917 13:38:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:58.917 13:38:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:58.917 13:38:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@65 -- # true 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@65 -- # count=0 00:14:58.918 13:38:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:58.918 13:38:38 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:58.918 13:38:38 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:58.918 13:38:38 -- bdev/bdev_raid.sh@111 -- # killprocess 114759 00:14:58.918 13:38:38 -- common/autotest_common.sh@926 -- # '[' -z 114759 ']' 00:14:58.918 13:38:38 -- common/autotest_common.sh@930 -- # kill -0 114759 00:14:58.918 13:38:38 -- common/autotest_common.sh@931 -- # uname 00:14:58.918 13:38:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.918 13:38:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114759 00:14:58.918 killing process with pid 114759 00:14:58.918 13:38:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:58.918 13:38:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:58.918 
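The loop that just returned 0 is the raid data/unmap check: a 2 MiB random pattern is written through /dev/nbd0 and compared back, then each (offset, length) pair is zeroed in the reference file while being blkdiscard-ed on the device, so a final cmp passing means discarded blocks read back as zeroes. One iteration, spelled out with the middle range from this run (1028 blocks in, 2035 blocks long):

dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidrandtest /dev/nbd0                    # pattern survived the raid
dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
blkdiscard -o $((1028 * 512)) -l $((2035 * 512)) /dev/nbd0   # 526336 / 1041920 bytes, as above
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidrandtest /dev/nbd0                    # discarded range reads back as zeroes
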
13:38:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114759' 00:14:58.918 13:38:38 -- common/autotest_common.sh@945 -- # kill 114759 00:14:58.918 [2024-07-10 13:38:38.183264] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.918 13:38:38 -- common/autotest_common.sh@950 -- # wait 114759 00:14:58.918 [2024-07-10 13:38:38.183370] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.918 [2024-07-10 13:38:38.183435] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.918 [2024-07-10 13:38:38.183443] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:59.176 [2024-07-10 13:38:38.374796] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:00.554 00:15:00.554 real 0m4.099s 00:15:00.554 user 0m5.038s 00:15:00.554 sys 0m0.829s 00:15:00.554 13:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.554 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.554 ************************************ 00:15:00.554 END TEST raid_function_test_raid0 00:15:00.554 ************************************ 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:15:00.554 13:38:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.554 13:38:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.554 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.554 ************************************ 00:15:00.554 START TEST raid_function_test_concat 00:15:00.554 ************************************ 00:15:00.554 13:38:39 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@86 -- # raid_pid=114914 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114914' 00:15:00.554 Process raid pid: 114914 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114914 /var/tmp/spdk-raid.sock 00:15:00.554 13:38:39 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:00.554 13:38:39 -- common/autotest_common.sh@819 -- # '[' -z 114914 ']' 00:15:00.554 13:38:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.554 13:38:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.554 13:38:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:00.554 13:38:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.554 13:38:39 -- common/autotest_common.sh@10 -- # set +x 00:15:00.554 [2024-07-10 13:38:39.725769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:00.554 [2024-07-10 13:38:39.725897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.554 [2024-07-10 13:38:39.880056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.813 [2024-07-10 13:38:40.061919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.072 [2024-07-10 13:38:40.257264] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.332 13:38:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.332 13:38:40 -- common/autotest_common.sh@852 -- # return 0 00:15:01.332 13:38:40 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:15:01.332 13:38:40 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:15:01.332 13:38:40 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:01.332 13:38:40 -- bdev/bdev_raid.sh@70 -- # cat 00:15:01.332 13:38:40 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:01.591 [2024-07-10 13:38:40.814681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:01.591 [2024-07-10 13:38:40.816281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:01.591 [2024-07-10 13:38:40.816371] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:01.591 [2024-07-10 13:38:40.816379] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:01.591 [2024-07-10 13:38:40.816496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:01.591 [2024-07-10 13:38:40.816769] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:01.591 [2024-07-10 13:38:40.816787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:15:01.591 [2024-07-10 13:38:40.816925] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.591 Base_1 00:15:01.592 Base_2 00:15:01.592 13:38:40 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:01.592 13:38:40 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:01.592 13:38:40 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:01.852 13:38:41 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:01.852 13:38:41 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:01.852 13:38:41 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@12 -- # local i 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:01.852 [2024-07-10 13:38:41.166058] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:01.852 /dev/nbd0 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.852 13:38:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.852 13:38:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:01.852 13:38:41 -- common/autotest_common.sh@857 -- # local i 00:15:01.852 13:38:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:01.852 13:38:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:01.852 13:38:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:01.852 13:38:41 -- common/autotest_common.sh@861 -- # break 00:15:01.852 13:38:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:01.852 13:38:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:01.852 13:38:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.112 1+0 records in 00:15:02.112 1+0 records out 00:15:02.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313503 s, 13.1 MB/s 00:15:02.112 13:38:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.112 13:38:41 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.112 13:38:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.112 13:38:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.112 13:38:41 -- common/autotest_common.sh@877 -- # return 0 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:02.112 { 00:15:02.112 "nbd_device": "/dev/nbd0", 00:15:02.112 "bdev_name": "raid" 00:15:02.112 } 00:15:02.112 ]' 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:02.112 { 00:15:02.112 "nbd_device": "/dev/nbd0", 00:15:02.112 "bdev_name": "raid" 00:15:02.112 } 00:15:02.112 ]' 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@65 -- # count=1 00:15:02.112 13:38:41 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:02.112 13:38:41 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 
00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:02.373 4096+0 records in 00:15:02.373 4096+0 records out 00:15:02.373 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0299105 s, 70.1 MB/s 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:02.373 4096+0 records in 00:15:02.373 4096+0 records out 00:15:02.373 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.167108 s, 12.5 MB/s 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:02.373 128+0 records in 00:15:02.373 128+0 records out 00:15:02.373 65536 bytes (66 kB, 64 KiB) copied, 0.000669427 s, 97.9 MB/s 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:02.373 13:38:41 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:02.633 2035+0 records in 00:15:02.633 2035+0 records out 00:15:02.633 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00729734 s, 143 MB/s 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:02.633 456+0 records in 00:15:02.633 456+0 records out 00:15:02.633 233472 bytes (233 kB, 228 KiB) copied, 0.0017891 s, 130 MB/s 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@46 -- # blockdev 
--flushbufs /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@51 -- # local i 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.633 [2024-07-10 13:38:41.971992] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@41 -- # break 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.633 13:38:41 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:02.633 13:38:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@65 -- # true 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@65 -- # count=0 00:15:02.893 13:38:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:02.893 13:38:42 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:02.893 13:38:42 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:02.893 13:38:42 -- bdev/bdev_raid.sh@111 -- # killprocess 114914 00:15:02.893 13:38:42 -- common/autotest_common.sh@926 -- # '[' -z 114914 ']' 00:15:02.893 13:38:42 -- common/autotest_common.sh@930 -- # kill -0 114914 00:15:02.893 13:38:42 -- common/autotest_common.sh@931 -- # uname 00:15:02.893 13:38:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.893 13:38:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114914 00:15:03.150 13:38:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:03.150 13:38:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:03.150 13:38:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114914' 00:15:03.150 killing process with pid 114914 00:15:03.150 13:38:42 -- common/autotest_common.sh@945 -- # kill 114914 00:15:03.150 [2024-07-10 13:38:42.255520] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:03.150 13:38:42 -- common/autotest_common.sh@950 -- # wait 114914 00:15:03.150 [2024-07-10 13:38:42.255619] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.150 [2024-07-10 13:38:42.255666] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.150 [2024-07-10 13:38:42.255675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:15:03.150 [2024-07-10 13:38:42.446774] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.527 ************************************ 00:15:04.527 END TEST raid_function_test_concat 00:15:04.527 ************************************ 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:04.527 00:15:04.527 real 0m4.041s 00:15:04.527 user 0m4.942s 00:15:04.527 sys 0m0.795s 00:15:04.527 13:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.527 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:15:04.527 13:38:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:04.527 13:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.527 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 ************************************ 00:15:04.527 START TEST raid0_resize_test 00:15:04.527 ************************************ 00:15:04.527 13:38:43 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@301 -- # raid_pid=115086 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 115086' 00:15:04.527 Process raid pid: 115086 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:04.527 13:38:43 -- bdev/bdev_raid.sh@303 -- # waitforlisten 115086 /var/tmp/spdk-raid.sock 00:15:04.527 13:38:43 -- common/autotest_common.sh@819 -- # '[' -z 115086 ']' 00:15:04.527 13:38:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:04.527 13:38:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:04.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:04.527 13:38:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:04.527 13:38:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:04.527 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 [2024-07-10 13:38:43.832706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:04.527 [2024-07-10 13:38:43.832853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.793 [2024-07-10 13:38:43.988220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.062 [2024-07-10 13:38:44.183686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.062 [2024-07-10 13:38:44.373986] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.321 13:38:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:05.321 13:38:44 -- common/autotest_common.sh@852 -- # return 0 00:15:05.321 13:38:44 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:05.580 Base_1 00:15:05.580 13:38:44 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:05.839 Base_2 00:15:05.839 13:38:44 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:05.839 [2024-07-10 13:38:45.181923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:05.839 [2024-07-10 13:38:45.183551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:05.839 [2024-07-10 13:38:45.183611] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:05.839 [2024-07-10 13:38:45.183619] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:05.839 [2024-07-10 13:38:45.183773] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:15:05.839 [2024-07-10 13:38:45.184031] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:05.839 [2024-07-10 13:38:45.184046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:15:05.839 [2024-07-10 13:38:45.184232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.839 13:38:45 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:06.097 [2024-07-10 13:38:45.353609] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:06.097 [2024-07-10 13:38:45.353637] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:06.097 true 00:15:06.097 13:38:45 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:06.097 13:38:45 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:15:06.356 [2024-07-10 13:38:45.525415] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.356 13:38:45 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:15:06.356 13:38:45 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:15:06.356 13:38:45 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:15:06.356 13:38:45 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:06.616 [2024-07-10 13:38:45.716924] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
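The resize test in flight here hinges on raid0 sizing itself off the smallest base bdev: growing only Base_1 leaves Raid at 131072 blocks, and the raid doubles to 262144 only once Base_2 grows as well. Replayed against the same socket (commands as traced; the expected num_blocks values are taken from this run):

rpc='./scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_null_create Base_1 32 512
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
$rpc bdev_null_resize Base_1 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072
$rpc bdev_null_resize Base_2 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 262144
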
00:15:06.616 [2024-07-10 13:38:45.716959] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:06.616 [2024-07-10 13:38:45.717000] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:15:06.616 [2024-07-10 13:38:45.717061] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:06.616 true 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:15:06.616 [2024-07-10 13:38:45.912718] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:15:06.616 13:38:45 -- bdev/bdev_raid.sh@332 -- # killprocess 115086 00:15:06.616 13:38:45 -- common/autotest_common.sh@926 -- # '[' -z 115086 ']' 00:15:06.616 13:38:45 -- common/autotest_common.sh@930 -- # kill -0 115086 00:15:06.616 13:38:45 -- common/autotest_common.sh@931 -- # uname 00:15:06.616 13:38:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.616 13:38:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115086 00:15:06.616 killing process with pid 115086 00:15:06.616 13:38:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.616 13:38:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.616 13:38:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115086' 00:15:06.616 13:38:45 -- common/autotest_common.sh@945 -- # kill 115086 00:15:06.616 [2024-07-10 13:38:45.955779] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.616 13:38:45 -- common/autotest_common.sh@950 -- # wait 115086 00:15:06.616 [2024-07-10 13:38:45.955871] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.616 [2024-07-10 13:38:45.955928] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.616 [2024-07-10 13:38:45.955936] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:15:06.616 [2024-07-10 13:38:45.956511] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@334 -- # return 0 00:15:07.995 00:15:07.995 real 0m3.418s 00:15:07.995 user 0m4.665s 00:15:07.995 sys 0m0.400s 00:15:07.995 13:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.995 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.995 ************************************ 00:15:07.995 END TEST raid0_resize_test 00:15:07.995 ************************************ 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:07.995 13:38:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:07.995 13:38:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.995 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.995 ************************************ 00:15:07.995 START TEST 
raid_state_function_test 00:15:07.995 ************************************ 00:15:07.995 13:38:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=115175 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115175' 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:07.995 Process raid pid: 115175 00:15:07.995 13:38:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115175 /var/tmp/spdk-raid.sock 00:15:07.995 13:38:47 -- common/autotest_common.sh@819 -- # '[' -z 115175 ']' 00:15:07.995 13:38:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:07.995 13:38:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:07.995 13:38:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:07.995 13:38:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.995 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.995 [2024-07-10 13:38:47.321407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
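Note: raid_state_function_test walks one raid0 array through its whole lifecycle: creating it before either base bdev exists leaves it "configuring", registering both bases brings it "online", and deleting one drops it straight to "offline", since raid0 has no redundancy to fall back on. The verify_raid_bdev_state helper asserts each step by filtering bdev_raid_get_bdevs output with jq; a hedged sketch of that check (the jq filter is verbatim from the trace, while the variable names and comparison around it are a simplified reconstruction of the helper):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<<"$info")
    [[ $state == configuring ]] || echo "expected configuring, got $state"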
00:15:07.995 [2024-07-10 13:38:47.321552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.254 [2024-07-10 13:38:47.477423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.514 [2024-07-10 13:38:47.664150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.514 [2024-07-10 13:38:47.860223] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.773 13:38:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.773 13:38:48 -- common/autotest_common.sh@852 -- # return 0 00:15:08.773 13:38:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:09.032 [2024-07-10 13:38:48.286201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.032 [2024-07-10 13:38:48.286276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.032 [2024-07-10 13:38:48.286286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.032 [2024-07-10 13:38:48.286299] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.032 13:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.291 13:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.291 "name": "Existed_Raid", 00:15:09.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.291 "strip_size_kb": 64, 00:15:09.291 "state": "configuring", 00:15:09.291 "raid_level": "raid0", 00:15:09.291 "superblock": false, 00:15:09.291 "num_base_bdevs": 2, 00:15:09.291 "num_base_bdevs_discovered": 0, 00:15:09.291 "num_base_bdevs_operational": 2, 00:15:09.291 "base_bdevs_list": [ 00:15:09.291 { 00:15:09.291 "name": "BaseBdev1", 00:15:09.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.291 "is_configured": false, 00:15:09.291 "data_offset": 0, 00:15:09.291 "data_size": 0 00:15:09.291 }, 00:15:09.291 { 00:15:09.291 "name": "BaseBdev2", 00:15:09.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.291 "is_configured": false, 00:15:09.291 "data_offset": 0, 00:15:09.291 "data_size": 0 00:15:09.291 } 00:15:09.291 ] 00:15:09.291 }' 00:15:09.291 13:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.291 13:38:48 -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.859 13:38:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.118 [2024-07-10 13:38:49.292317] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.118 [2024-07-10 13:38:49.292367] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:10.118 13:38:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.118 [2024-07-10 13:38:49.472035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.118 [2024-07-10 13:38:49.472139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.118 [2024-07-10 13:38:49.472149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.118 [2024-07-10 13:38:49.472166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.377 13:38:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.377 [2024-07-10 13:38:49.683577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.377 BaseBdev1 00:15:10.377 13:38:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:10.377 13:38:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:10.377 13:38:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:10.377 13:38:49 -- common/autotest_common.sh@889 -- # local i 00:15:10.377 13:38:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:10.377 13:38:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:10.377 13:38:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.636 13:38:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:10.895 [ 00:15:10.895 { 00:15:10.895 "name": "BaseBdev1", 00:15:10.895 "aliases": [ 00:15:10.895 "2c365e6d-0001-4418-bbfd-85b15a43eb10" 00:15:10.895 ], 00:15:10.895 "product_name": "Malloc disk", 00:15:10.895 "block_size": 512, 00:15:10.895 "num_blocks": 65536, 00:15:10.895 "uuid": "2c365e6d-0001-4418-bbfd-85b15a43eb10", 00:15:10.895 "assigned_rate_limits": { 00:15:10.895 "rw_ios_per_sec": 0, 00:15:10.895 "rw_mbytes_per_sec": 0, 00:15:10.895 "r_mbytes_per_sec": 0, 00:15:10.895 "w_mbytes_per_sec": 0 00:15:10.895 }, 00:15:10.895 "claimed": true, 00:15:10.895 "claim_type": "exclusive_write", 00:15:10.895 "zoned": false, 00:15:10.895 "supported_io_types": { 00:15:10.895 "read": true, 00:15:10.895 "write": true, 00:15:10.895 "unmap": true, 00:15:10.895 "write_zeroes": true, 00:15:10.895 "flush": true, 00:15:10.895 "reset": true, 00:15:10.895 "compare": false, 00:15:10.895 "compare_and_write": false, 00:15:10.895 "abort": true, 00:15:10.895 "nvme_admin": false, 00:15:10.895 "nvme_io": false 00:15:10.895 }, 00:15:10.895 "memory_domains": [ 00:15:10.895 { 00:15:10.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.895 "dma_device_type": 2 00:15:10.895 } 00:15:10.895 ], 00:15:10.895 "driver_specific": {} 00:15:10.895 } 00:15:10.895 ] 00:15:10.895 13:38:50 
-- common/autotest_common.sh@895 -- # return 0 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.895 "name": "Existed_Raid", 00:15:10.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.895 "strip_size_kb": 64, 00:15:10.895 "state": "configuring", 00:15:10.895 "raid_level": "raid0", 00:15:10.895 "superblock": false, 00:15:10.895 "num_base_bdevs": 2, 00:15:10.895 "num_base_bdevs_discovered": 1, 00:15:10.895 "num_base_bdevs_operational": 2, 00:15:10.895 "base_bdevs_list": [ 00:15:10.895 { 00:15:10.895 "name": "BaseBdev1", 00:15:10.895 "uuid": "2c365e6d-0001-4418-bbfd-85b15a43eb10", 00:15:10.895 "is_configured": true, 00:15:10.895 "data_offset": 0, 00:15:10.895 "data_size": 65536 00:15:10.895 }, 00:15:10.895 { 00:15:10.895 "name": "BaseBdev2", 00:15:10.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.895 "is_configured": false, 00:15:10.895 "data_offset": 0, 00:15:10.895 "data_size": 0 00:15:10.895 } 00:15:10.895 ] 00:15:10.895 }' 00:15:10.895 13:38:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.895 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:15:11.831 13:38:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.831 [2024-07-10 13:38:50.989274] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.831 [2024-07-10 13:38:50.989338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:11.831 13:38:50 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.831 [2024-07-10 13:38:51.161014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.831 [2024-07-10 13:38:51.162646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.831 [2024-07-10 13:38:51.162698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:11.831 13:38:51 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.831 13:38:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.090 13:38:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.090 "name": "Existed_Raid", 00:15:12.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.090 "strip_size_kb": 64, 00:15:12.090 "state": "configuring", 00:15:12.090 "raid_level": "raid0", 00:15:12.090 "superblock": false, 00:15:12.090 "num_base_bdevs": 2, 00:15:12.090 "num_base_bdevs_discovered": 1, 00:15:12.090 "num_base_bdevs_operational": 2, 00:15:12.090 "base_bdevs_list": [ 00:15:12.090 { 00:15:12.090 "name": "BaseBdev1", 00:15:12.090 "uuid": "2c365e6d-0001-4418-bbfd-85b15a43eb10", 00:15:12.090 "is_configured": true, 00:15:12.090 "data_offset": 0, 00:15:12.090 "data_size": 65536 00:15:12.090 }, 00:15:12.090 { 00:15:12.090 "name": "BaseBdev2", 00:15:12.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.090 "is_configured": false, 00:15:12.090 "data_offset": 0, 00:15:12.090 "data_size": 0 00:15:12.090 } 00:15:12.090 ] 00:15:12.090 }' 00:15:12.090 13:38:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.090 13:38:51 -- common/autotest_common.sh@10 -- # set +x 00:15:12.657 13:38:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.916 [2024-07-10 13:38:52.184726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.916 [2024-07-10 13:38:52.184772] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:12.916 [2024-07-10 13:38:52.184787] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:12.916 [2024-07-10 13:38:52.184894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:12.916 [2024-07-10 13:38:52.185160] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:12.916 [2024-07-10 13:38:52.185177] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:12.916 [2024-07-10 13:38:52.185452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.916 BaseBdev2 00:15:12.916 13:38:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:12.916 13:38:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:12.916 13:38:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:12.916 13:38:52 -- common/autotest_common.sh@889 -- # local i 00:15:12.916 13:38:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:12.916 13:38:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:12.916 
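Note: the waitforbdev helper entered here (autotest_common.sh@887-@895 in the trace) is how the script blocks until a freshly created bdev is usable: it flushes pending examine callbacks with bdev_wait_for_examine, then fetches the bdev by name with bdev_get_bdevs -b NAME -t 2000, where -t waits up to 2000 ms for the bdev to appear before failing. A reduced sketch of that pattern, reusing the $RPC shorthand from the sketches above (the real helper's retry and error handling is omitted):

    waitforbdev() {
        local name=$1 timeout=${2:-2000}
        $RPC bdev_wait_for_examine                                # let pending examine callbacks finish
        $RPC bdev_get_bdevs -b "$name" -t "$timeout" >/dev/null   # -t: wait for the bdev to appear
    }
    waitforbdev BaseBdev2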
13:38:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.174 13:38:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.433 [ 00:15:13.433 { 00:15:13.433 "name": "BaseBdev2", 00:15:13.433 "aliases": [ 00:15:13.433 "7d7b9df8-c753-4521-b6b6-c542c115c0ec" 00:15:13.433 ], 00:15:13.433 "product_name": "Malloc disk", 00:15:13.433 "block_size": 512, 00:15:13.433 "num_blocks": 65536, 00:15:13.433 "uuid": "7d7b9df8-c753-4521-b6b6-c542c115c0ec", 00:15:13.433 "assigned_rate_limits": { 00:15:13.433 "rw_ios_per_sec": 0, 00:15:13.433 "rw_mbytes_per_sec": 0, 00:15:13.433 "r_mbytes_per_sec": 0, 00:15:13.433 "w_mbytes_per_sec": 0 00:15:13.433 }, 00:15:13.433 "claimed": true, 00:15:13.433 "claim_type": "exclusive_write", 00:15:13.433 "zoned": false, 00:15:13.433 "supported_io_types": { 00:15:13.433 "read": true, 00:15:13.433 "write": true, 00:15:13.433 "unmap": true, 00:15:13.433 "write_zeroes": true, 00:15:13.433 "flush": true, 00:15:13.433 "reset": true, 00:15:13.433 "compare": false, 00:15:13.433 "compare_and_write": false, 00:15:13.433 "abort": true, 00:15:13.433 "nvme_admin": false, 00:15:13.433 "nvme_io": false 00:15:13.433 }, 00:15:13.433 "memory_domains": [ 00:15:13.433 { 00:15:13.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.433 "dma_device_type": 2 00:15:13.433 } 00:15:13.433 ], 00:15:13.433 "driver_specific": {} 00:15:13.433 } 00:15:13.433 ] 00:15:13.433 13:38:52 -- common/autotest_common.sh@895 -- # return 0 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.433 "name": "Existed_Raid", 00:15:13.433 "uuid": "a3f80f00-1f66-4c99-b90f-2b32cc68fd3b", 00:15:13.433 "strip_size_kb": 64, 00:15:13.433 "state": "online", 00:15:13.433 "raid_level": "raid0", 00:15:13.433 "superblock": false, 00:15:13.433 "num_base_bdevs": 2, 00:15:13.433 "num_base_bdevs_discovered": 2, 00:15:13.433 "num_base_bdevs_operational": 2, 00:15:13.433 "base_bdevs_list": [ 00:15:13.433 { 00:15:13.433 "name": "BaseBdev1", 00:15:13.433 "uuid": "2c365e6d-0001-4418-bbfd-85b15a43eb10", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 }, 00:15:13.433 { 00:15:13.433 "name": "BaseBdev2", 
00:15:13.433 "uuid": "7d7b9df8-c753-4521-b6b6-c542c115c0ec", 00:15:13.433 "is_configured": true, 00:15:13.433 "data_offset": 0, 00:15:13.433 "data_size": 65536 00:15:13.433 } 00:15:13.433 ] 00:15:13.433 }' 00:15:13.433 13:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.433 13:38:52 -- common/autotest_common.sh@10 -- # set +x 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.369 [2024-07-10 13:38:53.542388] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.369 [2024-07-10 13:38:53.542430] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.369 [2024-07-10 13:38:53.542486] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.369 13:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.627 13:38:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.627 "name": "Existed_Raid", 00:15:14.627 "uuid": "a3f80f00-1f66-4c99-b90f-2b32cc68fd3b", 00:15:14.628 "strip_size_kb": 64, 00:15:14.628 "state": "offline", 00:15:14.628 "raid_level": "raid0", 00:15:14.628 "superblock": false, 00:15:14.628 "num_base_bdevs": 2, 00:15:14.628 "num_base_bdevs_discovered": 1, 00:15:14.628 "num_base_bdevs_operational": 1, 00:15:14.628 "base_bdevs_list": [ 00:15:14.628 { 00:15:14.628 "name": null, 00:15:14.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.628 "is_configured": false, 00:15:14.628 "data_offset": 0, 00:15:14.628 "data_size": 65536 00:15:14.628 }, 00:15:14.628 { 00:15:14.628 "name": "BaseBdev2", 00:15:14.628 "uuid": "7d7b9df8-c753-4521-b6b6-c542c115c0ec", 00:15:14.628 "is_configured": true, 00:15:14.628 "data_offset": 0, 00:15:14.628 "data_size": 65536 00:15:14.628 } 00:15:14.628 ] 00:15:14.628 }' 00:15:14.628 13:38:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.628 13:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:15.194 13:38:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:15.194 13:38:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:15.194 13:38:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.194 13:38:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:15.451 13:38:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:15.451 13:38:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.451 13:38:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:15.451 [2024-07-10 13:38:54.805584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.451 [2024-07-10 13:38:54.805653] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:15.734 13:38:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:15.734 13:38:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:15.734 13:38:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.734 13:38:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.997 13:38:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:15.997 13:38:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:15.997 13:38:55 -- bdev/bdev_raid.sh@287 -- # killprocess 115175 00:15:15.997 13:38:55 -- common/autotest_common.sh@926 -- # '[' -z 115175 ']' 00:15:15.997 13:38:55 -- common/autotest_common.sh@930 -- # kill -0 115175 00:15:15.997 13:38:55 -- common/autotest_common.sh@931 -- # uname 00:15:15.997 13:38:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.997 13:38:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115175 00:15:15.997 killing process with pid 115175 00:15:15.997 13:38:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:15.997 13:38:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:15.997 13:38:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115175' 00:15:15.997 13:38:55 -- common/autotest_common.sh@945 -- # kill 115175 00:15:15.997 13:38:55 -- common/autotest_common.sh@950 -- # wait 115175 00:15:15.997 [2024-07-10 13:38:55.130636] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.997 [2024-07-10 13:38:55.130747] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.370 ************************************ 00:15:17.370 END TEST raid_state_function_test 00:15:17.370 ************************************ 00:15:17.370 13:38:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:17.370 00:15:17.371 real 0m9.163s 00:15:17.371 user 0m15.526s 00:15:17.371 sys 0m1.030s 00:15:17.371 13:38:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.371 13:38:56 -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:17.371 13:38:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:17.371 13:38:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.371 13:38:56 -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 ************************************ 00:15:17.371 START TEST raid_state_function_test_sb 00:15:17.371 ************************************ 00:15:17.371 13:38:56 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:17.371 13:38:56 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=115496 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115496' 00:15:17.371 Process raid pid: 115496 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115496 /var/tmp/spdk-raid.sock 00:15:17.371 13:38:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.371 13:38:56 -- common/autotest_common.sh@819 -- # '[' -z 115496 ']' 00:15:17.371 13:38:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.371 13:38:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.371 13:38:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.371 13:38:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.371 13:38:56 -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 [2024-07-10 13:38:56.544470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
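Note: raid_state_function_test_sb repeats the same state machine with superblock=true, so every bdev_raid_create below carries -s and each 65536-block base bdev gives up space for on-disk raid metadata; the JSON dumps later in this run show that as data_offset 2048 and data_size 63488 where the non-superblock run had 0 and 65536. The only create-side difference, sketched with the same $RPC shorthand as above:

    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # -s: persist a superblock on each base bdev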
00:15:17.371 [2024-07-10 13:38:56.544654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.371 [2024-07-10 13:38:56.683881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.629 [2024-07-10 13:38:56.871984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.886 [2024-07-10 13:38:57.075445] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.145 13:38:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.145 13:38:57 -- common/autotest_common.sh@852 -- # return 0 00:15:18.145 13:38:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:18.402 [2024-07-10 13:38:57.527997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.402 [2024-07-10 13:38:57.528080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.402 [2024-07-10 13:38:57.528103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.402 [2024-07-10 13:38:57.528117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:18.402 13:38:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.403 "name": "Existed_Raid", 00:15:18.403 "uuid": "9c9930e3-75f4-48db-b07b-12713eb44775", 00:15:18.403 "strip_size_kb": 64, 00:15:18.403 "state": "configuring", 00:15:18.403 "raid_level": "raid0", 00:15:18.403 "superblock": true, 00:15:18.403 "num_base_bdevs": 2, 00:15:18.403 "num_base_bdevs_discovered": 0, 00:15:18.403 "num_base_bdevs_operational": 2, 00:15:18.403 "base_bdevs_list": [ 00:15:18.403 { 00:15:18.403 "name": "BaseBdev1", 00:15:18.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.403 "is_configured": false, 00:15:18.403 "data_offset": 0, 00:15:18.403 "data_size": 0 00:15:18.403 }, 00:15:18.403 { 00:15:18.403 "name": "BaseBdev2", 00:15:18.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.403 "is_configured": false, 00:15:18.403 "data_offset": 0, 00:15:18.403 "data_size": 0 00:15:18.403 } 00:15:18.403 ] 00:15:18.403 }' 00:15:18.403 13:38:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.403 13:38:57 -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.336 13:38:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.336 [2024-07-10 13:38:58.518059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.336 [2024-07-10 13:38:58.518100] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:19.336 13:38:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.594 [2024-07-10 13:38:58.693802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.594 [2024-07-10 13:38:58.693876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.594 [2024-07-10 13:38:58.693884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.594 [2024-07-10 13:38:58.693902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.594 13:38:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.594 [2024-07-10 13:38:58.915246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.594 BaseBdev1 00:15:19.594 13:38:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:19.594 13:38:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:19.594 13:38:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:19.594 13:38:58 -- common/autotest_common.sh@889 -- # local i 00:15:19.594 13:38:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:19.594 13:38:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:19.594 13:38:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.852 13:38:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.110 [ 00:15:20.110 { 00:15:20.110 "name": "BaseBdev1", 00:15:20.110 "aliases": [ 00:15:20.110 "22e232d0-72d9-4fa7-bd77-c6031ca1f70b" 00:15:20.110 ], 00:15:20.110 "product_name": "Malloc disk", 00:15:20.110 "block_size": 512, 00:15:20.110 "num_blocks": 65536, 00:15:20.110 "uuid": "22e232d0-72d9-4fa7-bd77-c6031ca1f70b", 00:15:20.110 "assigned_rate_limits": { 00:15:20.110 "rw_ios_per_sec": 0, 00:15:20.110 "rw_mbytes_per_sec": 0, 00:15:20.110 "r_mbytes_per_sec": 0, 00:15:20.110 "w_mbytes_per_sec": 0 00:15:20.110 }, 00:15:20.110 "claimed": true, 00:15:20.110 "claim_type": "exclusive_write", 00:15:20.110 "zoned": false, 00:15:20.110 "supported_io_types": { 00:15:20.110 "read": true, 00:15:20.110 "write": true, 00:15:20.110 "unmap": true, 00:15:20.110 "write_zeroes": true, 00:15:20.110 "flush": true, 00:15:20.110 "reset": true, 00:15:20.110 "compare": false, 00:15:20.110 "compare_and_write": false, 00:15:20.110 "abort": true, 00:15:20.110 "nvme_admin": false, 00:15:20.110 "nvme_io": false 00:15:20.110 }, 00:15:20.110 "memory_domains": [ 00:15:20.110 { 00:15:20.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.110 "dma_device_type": 2 00:15:20.110 } 00:15:20.110 ], 00:15:20.110 "driver_specific": {} 00:15:20.110 } 00:15:20.110 ] 00:15:20.110 
13:38:59 -- common/autotest_common.sh@895 -- # return 0 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.110 13:38:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.368 13:38:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.368 "name": "Existed_Raid", 00:15:20.368 "uuid": "9450abdf-e291-45aa-9c43-c9a2697ea315", 00:15:20.368 "strip_size_kb": 64, 00:15:20.368 "state": "configuring", 00:15:20.368 "raid_level": "raid0", 00:15:20.368 "superblock": true, 00:15:20.369 "num_base_bdevs": 2, 00:15:20.369 "num_base_bdevs_discovered": 1, 00:15:20.369 "num_base_bdevs_operational": 2, 00:15:20.369 "base_bdevs_list": [ 00:15:20.369 { 00:15:20.369 "name": "BaseBdev1", 00:15:20.369 "uuid": "22e232d0-72d9-4fa7-bd77-c6031ca1f70b", 00:15:20.369 "is_configured": true, 00:15:20.369 "data_offset": 2048, 00:15:20.369 "data_size": 63488 00:15:20.369 }, 00:15:20.369 { 00:15:20.369 "name": "BaseBdev2", 00:15:20.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.369 "is_configured": false, 00:15:20.369 "data_offset": 0, 00:15:20.369 "data_size": 0 00:15:20.369 } 00:15:20.369 ] 00:15:20.369 }' 00:15:20.369 13:38:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.369 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:15:20.984 13:39:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:20.985 [2024-07-10 13:39:00.276897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.985 [2024-07-10 13:39:00.276962] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:20.985 13:39:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:20.985 13:39:00 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.273 13:39:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.538 BaseBdev1 00:15:21.538 13:39:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:21.538 13:39:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:21.538 13:39:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:21.538 13:39:00 -- common/autotest_common.sh@889 -- # local i 00:15:21.538 13:39:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:21.538 13:39:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:21.538 13:39:00 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.797 13:39:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.055 [ 00:15:22.055 { 00:15:22.055 "name": "BaseBdev1", 00:15:22.055 "aliases": [ 00:15:22.055 "7afe7b63-7166-4971-838a-ec40cd915122" 00:15:22.055 ], 00:15:22.055 "product_name": "Malloc disk", 00:15:22.055 "block_size": 512, 00:15:22.055 "num_blocks": 65536, 00:15:22.055 "uuid": "7afe7b63-7166-4971-838a-ec40cd915122", 00:15:22.055 "assigned_rate_limits": { 00:15:22.055 "rw_ios_per_sec": 0, 00:15:22.055 "rw_mbytes_per_sec": 0, 00:15:22.055 "r_mbytes_per_sec": 0, 00:15:22.055 "w_mbytes_per_sec": 0 00:15:22.055 }, 00:15:22.055 "claimed": false, 00:15:22.055 "zoned": false, 00:15:22.055 "supported_io_types": { 00:15:22.055 "read": true, 00:15:22.055 "write": true, 00:15:22.055 "unmap": true, 00:15:22.055 "write_zeroes": true, 00:15:22.055 "flush": true, 00:15:22.055 "reset": true, 00:15:22.055 "compare": false, 00:15:22.055 "compare_and_write": false, 00:15:22.055 "abort": true, 00:15:22.055 "nvme_admin": false, 00:15:22.055 "nvme_io": false 00:15:22.055 }, 00:15:22.055 "memory_domains": [ 00:15:22.055 { 00:15:22.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.055 "dma_device_type": 2 00:15:22.055 } 00:15:22.055 ], 00:15:22.055 "driver_specific": {} 00:15:22.055 } 00:15:22.055 ] 00:15:22.055 13:39:01 -- common/autotest_common.sh@895 -- # return 0 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.055 [2024-07-10 13:39:01.360307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.055 [2024-07-10 13:39:01.361963] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.055 [2024-07-10 13:39:01.362024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.055 13:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.313 13:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.313 "name": "Existed_Raid", 00:15:22.313 "uuid": "f87dd111-7ee9-4de0-a136-93e3ae9ec066", 00:15:22.313 "strip_size_kb": 64, 00:15:22.313 "state": 
"configuring", 00:15:22.313 "raid_level": "raid0", 00:15:22.313 "superblock": true, 00:15:22.313 "num_base_bdevs": 2, 00:15:22.313 "num_base_bdevs_discovered": 1, 00:15:22.313 "num_base_bdevs_operational": 2, 00:15:22.313 "base_bdevs_list": [ 00:15:22.313 { 00:15:22.313 "name": "BaseBdev1", 00:15:22.313 "uuid": "7afe7b63-7166-4971-838a-ec40cd915122", 00:15:22.313 "is_configured": true, 00:15:22.313 "data_offset": 2048, 00:15:22.313 "data_size": 63488 00:15:22.313 }, 00:15:22.313 { 00:15:22.313 "name": "BaseBdev2", 00:15:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.313 "is_configured": false, 00:15:22.313 "data_offset": 0, 00:15:22.313 "data_size": 0 00:15:22.313 } 00:15:22.313 ] 00:15:22.313 }' 00:15:22.313 13:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.313 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:15:22.879 13:39:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.138 [2024-07-10 13:39:02.405975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.138 [2024-07-10 13:39:02.406167] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:23.138 [2024-07-10 13:39:02.406177] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.138 [2024-07-10 13:39:02.406336] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:23.138 BaseBdev2 00:15:23.138 [2024-07-10 13:39:02.406635] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:23.138 [2024-07-10 13:39:02.406656] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:23.138 [2024-07-10 13:39:02.406792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.138 13:39:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:23.138 13:39:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:23.138 13:39:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.138 13:39:02 -- common/autotest_common.sh@889 -- # local i 00:15:23.138 13:39:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.138 13:39:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.138 13:39:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.396 13:39:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.655 [ 00:15:23.655 { 00:15:23.655 "name": "BaseBdev2", 00:15:23.655 "aliases": [ 00:15:23.655 "fd4b86db-bd63-4589-8a02-eeb488c2c817" 00:15:23.655 ], 00:15:23.655 "product_name": "Malloc disk", 00:15:23.655 "block_size": 512, 00:15:23.655 "num_blocks": 65536, 00:15:23.655 "uuid": "fd4b86db-bd63-4589-8a02-eeb488c2c817", 00:15:23.655 "assigned_rate_limits": { 00:15:23.655 "rw_ios_per_sec": 0, 00:15:23.655 "rw_mbytes_per_sec": 0, 00:15:23.655 "r_mbytes_per_sec": 0, 00:15:23.655 "w_mbytes_per_sec": 0 00:15:23.655 }, 00:15:23.655 "claimed": true, 00:15:23.655 "claim_type": "exclusive_write", 00:15:23.655 "zoned": false, 00:15:23.655 "supported_io_types": { 00:15:23.655 "read": true, 00:15:23.655 "write": true, 00:15:23.655 "unmap": true, 00:15:23.655 "write_zeroes": true, 00:15:23.655 "flush": true, 00:15:23.655 
"reset": true, 00:15:23.655 "compare": false, 00:15:23.655 "compare_and_write": false, 00:15:23.655 "abort": true, 00:15:23.655 "nvme_admin": false, 00:15:23.655 "nvme_io": false 00:15:23.655 }, 00:15:23.655 "memory_domains": [ 00:15:23.655 { 00:15:23.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.655 "dma_device_type": 2 00:15:23.655 } 00:15:23.655 ], 00:15:23.655 "driver_specific": {} 00:15:23.655 } 00:15:23.655 ] 00:15:23.655 13:39:02 -- common/autotest_common.sh@895 -- # return 0 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.655 "name": "Existed_Raid", 00:15:23.655 "uuid": "f87dd111-7ee9-4de0-a136-93e3ae9ec066", 00:15:23.655 "strip_size_kb": 64, 00:15:23.655 "state": "online", 00:15:23.655 "raid_level": "raid0", 00:15:23.655 "superblock": true, 00:15:23.655 "num_base_bdevs": 2, 00:15:23.655 "num_base_bdevs_discovered": 2, 00:15:23.655 "num_base_bdevs_operational": 2, 00:15:23.655 "base_bdevs_list": [ 00:15:23.655 { 00:15:23.655 "name": "BaseBdev1", 00:15:23.655 "uuid": "7afe7b63-7166-4971-838a-ec40cd915122", 00:15:23.655 "is_configured": true, 00:15:23.655 "data_offset": 2048, 00:15:23.655 "data_size": 63488 00:15:23.655 }, 00:15:23.655 { 00:15:23.655 "name": "BaseBdev2", 00:15:23.655 "uuid": "fd4b86db-bd63-4589-8a02-eeb488c2c817", 00:15:23.655 "is_configured": true, 00:15:23.655 "data_offset": 2048, 00:15:23.655 "data_size": 63488 00:15:23.655 } 00:15:23.655 ] 00:15:23.655 }' 00:15:23.655 13:39:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.655 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:24.589 [2024-07-10 13:39:03.783734] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.589 [2024-07-10 13:39:03.783776] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.589 [2024-07-10 13:39:03.783861] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:24.589 
13:39:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.589 13:39:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.848 13:39:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.848 "name": "Existed_Raid", 00:15:24.848 "uuid": "f87dd111-7ee9-4de0-a136-93e3ae9ec066", 00:15:24.848 "strip_size_kb": 64, 00:15:24.848 "state": "offline", 00:15:24.848 "raid_level": "raid0", 00:15:24.848 "superblock": true, 00:15:24.848 "num_base_bdevs": 2, 00:15:24.848 "num_base_bdevs_discovered": 1, 00:15:24.848 "num_base_bdevs_operational": 1, 00:15:24.848 "base_bdevs_list": [ 00:15:24.848 { 00:15:24.848 "name": null, 00:15:24.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.848 "is_configured": false, 00:15:24.848 "data_offset": 2048, 00:15:24.848 "data_size": 63488 00:15:24.848 }, 00:15:24.848 { 00:15:24.848 "name": "BaseBdev2", 00:15:24.848 "uuid": "fd4b86db-bd63-4589-8a02-eeb488c2c817", 00:15:24.848 "is_configured": true, 00:15:24.848 "data_offset": 2048, 00:15:24.848 "data_size": 63488 00:15:24.848 } 00:15:24.848 ] 00:15:24.848 }' 00:15:24.848 13:39:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.848 13:39:04 -- common/autotest_common.sh@10 -- # set +x 00:15:25.416 13:39:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:25.416 13:39:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:25.416 13:39:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.416 13:39:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:25.675 13:39:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:25.675 13:39:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.675 13:39:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:25.935 [2024-07-10 13:39:05.103612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.935 [2024-07-10 13:39:05.103692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:25.935 13:39:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:25.935 13:39:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:25.935 13:39:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.935 13:39:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.241 13:39:05 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:26.241 13:39:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:26.241 13:39:05 -- bdev/bdev_raid.sh@287 -- # killprocess 115496 00:15:26.241 13:39:05 -- common/autotest_common.sh@926 -- # '[' -z 115496 ']' 00:15:26.241 13:39:05 -- common/autotest_common.sh@930 -- # kill -0 115496 00:15:26.241 13:39:05 -- common/autotest_common.sh@931 -- # uname 00:15:26.241 13:39:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:26.241 13:39:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115496 00:15:26.241 killing process with pid 115496 00:15:26.241 13:39:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:26.241 13:39:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:26.241 13:39:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115496' 00:15:26.241 13:39:05 -- common/autotest_common.sh@945 -- # kill 115496 00:15:26.241 13:39:05 -- common/autotest_common.sh@950 -- # wait 115496 00:15:26.241 [2024-07-10 13:39:05.447161] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.241 [2024-07-10 13:39:05.447405] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.660 ************************************ 00:15:27.660 END TEST raid_state_function_test_sb 00:15:27.660 ************************************ 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:27.660 00:15:27.660 real 0m10.229s 00:15:27.660 user 0m17.475s 00:15:27.660 sys 0m1.190s 00:15:27.660 13:39:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.660 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:27.660 13:39:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:27.660 13:39:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:27.660 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:15:27.660 ************************************ 00:15:27.660 START TEST raid_superblock_test 00:15:27.660 ************************************ 00:15:27.660 13:39:06 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@357 -- # raid_pid=115837 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:27.660 13:39:06 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115837 /var/tmp/spdk-raid.sock 00:15:27.660 13:39:06 -- common/autotest_common.sh@819 -- # '[' -z 115837 ']' 00:15:27.660 13:39:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.660 13:39:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:27.660 13:39:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:27.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.660 13:39:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:27.660 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:15:27.660 [2024-07-10 13:39:06.838870] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:27.660 [2024-07-10 13:39:06.839028] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115837 ] 00:15:27.660 [2024-07-10 13:39:06.998449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.919 [2024-07-10 13:39:07.195488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.178 [2024-07-10 13:39:07.387453] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.438 13:39:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:28.438 13:39:07 -- common/autotest_common.sh@852 -- # return 0 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.438 13:39:07 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:28.697 malloc1 00:15:28.697 13:39:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.697 [2024-07-10 13:39:08.041588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.697 [2024-07-10 13:39:08.041691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.697 [2024-07-10 13:39:08.041717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:28.697 [2024-07-10 13:39:08.041757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.697 [2024-07-10 13:39:08.043861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.697 [2024-07-10 13:39:08.043931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.697 pt1 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.955 13:39:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:28.955 malloc2 00:15:28.956 13:39:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.215 [2024-07-10 13:39:08.490702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.215 [2024-07-10 13:39:08.490776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.215 [2024-07-10 13:39:08.490811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:29.215 [2024-07-10 13:39:08.490850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.215 [2024-07-10 13:39:08.492865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.215 [2024-07-10 13:39:08.492916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.215 pt2 00:15:29.215 13:39:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:29.215 13:39:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:29.215 13:39:08 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:29.475 [2024-07-10 13:39:08.646508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.475 [2024-07-10 13:39:08.648128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.475 [2024-07-10 13:39:08.648284] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:29.475 [2024-07-10 13:39:08.648306] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:29.475 [2024-07-10 13:39:08.648448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:29.475 [2024-07-10 13:39:08.648747] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:29.475 [2024-07-10 13:39:08.648763] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:29.475 [2024-07-10 13:39:08.648904] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.475 13:39:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.734 13:39:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.734 "name": "raid_bdev1", 00:15:29.734 "uuid": "1d700b0f-5693-4dcd-bb9b-1c336eab8f78", 00:15:29.734 "strip_size_kb": 64, 00:15:29.734 "state": "online", 00:15:29.734 "raid_level": "raid0", 00:15:29.734 "superblock": true, 00:15:29.734 "num_base_bdevs": 2, 00:15:29.734 "num_base_bdevs_discovered": 2, 00:15:29.734 "num_base_bdevs_operational": 2, 00:15:29.734 "base_bdevs_list": [ 00:15:29.734 { 00:15:29.734 "name": "pt1", 00:15:29.734 "uuid": "ff361fa5-8e57-512e-a61e-a287d081cb28", 00:15:29.734 "is_configured": true, 00:15:29.734 "data_offset": 2048, 00:15:29.734 "data_size": 63488 00:15:29.734 }, 00:15:29.734 { 00:15:29.734 "name": "pt2", 00:15:29.734 "uuid": "233ea899-5f8c-5863-a2c0-551e93ea41ce", 00:15:29.734 "is_configured": true, 00:15:29.734 "data_offset": 2048, 00:15:29.734 "data_size": 63488 00:15:29.734 } 00:15:29.734 ] 00:15:29.734 }' 00:15:29.734 13:39:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.734 13:39:08 -- common/autotest_common.sh@10 -- # set +x 00:15:30.302 13:39:09 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:30.302 13:39:09 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:30.561 [2024-07-10 13:39:09.680801] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.561 13:39:09 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1d700b0f-5693-4dcd-bb9b-1c336eab8f78 00:15:30.561 13:39:09 -- bdev/bdev_raid.sh@380 -- # '[' -z 1d700b0f-5693-4dcd-bb9b-1c336eab8f78 ']' 00:15:30.561 13:39:09 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:30.561 [2024-07-10 13:39:09.868294] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.561 [2024-07-10 13:39:09.868334] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.561 [2024-07-10 13:39:09.868418] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.561 [2024-07-10 13:39:09.868467] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.561 [2024-07-10 13:39:09.868475] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:30.561 13:39:09 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.561 13:39:09 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:30.820 13:39:10 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:30.820 13:39:10 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:30.820 13:39:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:30.820 13:39:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:31.078 13:39:10 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.078 13:39:10 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:31.078 13:39:10 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:31.078 13:39:10 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:31.339 13:39:10 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:31.339 13:39:10 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:31.339 13:39:10 -- common/autotest_common.sh@640 -- # local es=0 00:15:31.339 13:39:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:31.339 13:39:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.339 13:39:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:31.339 13:39:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.339 13:39:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:31.339 13:39:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.339 13:39:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:31.339 13:39:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.339 13:39:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:31.339 13:39:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:31.607 [2024-07-10 13:39:10.762675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:31.607 [2024-07-10 13:39:10.764301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:31.607 [2024-07-10 13:39:10.764363] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:31.607 [2024-07-10 13:39:10.764416] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:31.607 [2024-07-10 13:39:10.764444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.607 [2024-07-10 13:39:10.764452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:31.607 request: 00:15:31.607 { 00:15:31.607 "name": "raid_bdev1", 00:15:31.607 "raid_level": "raid0", 00:15:31.607 "base_bdevs": [ 00:15:31.607 "malloc1", 00:15:31.607 "malloc2" 00:15:31.607 ], 00:15:31.607 "superblock": false, 00:15:31.607 "strip_size_kb": 64, 00:15:31.607 "method": "bdev_raid_create", 00:15:31.607 "req_id": 1 00:15:31.607 } 00:15:31.607 Got JSON-RPC error response 00:15:31.607 response: 00:15:31.607 { 00:15:31.607 "code": -17, 00:15:31.607 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:31.607 } 00:15:31.607 13:39:10 -- common/autotest_common.sh@643 -- # es=1 00:15:31.607 13:39:10 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:31.607 13:39:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:31.607 13:39:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:31.607 13:39:10 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:31.607 13:39:10 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.867 13:39:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:31.867 13:39:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:31.867 13:39:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.867 [2024-07-10 13:39:11.149943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.867 [2024-07-10 13:39:11.150035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.867 [2024-07-10 13:39:11.150064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.867 [2024-07-10 13:39:11.150083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.867 [2024-07-10 13:39:11.151979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.867 [2024-07-10 13:39:11.152036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.867 [2024-07-10 13:39:11.152152] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:31.867 [2024-07-10 13:39:11.152241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.867 pt1 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.867 13:39:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.127 13:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.127 "name": "raid_bdev1", 00:15:32.127 "uuid": "1d700b0f-5693-4dcd-bb9b-1c336eab8f78", 00:15:32.127 "strip_size_kb": 64, 00:15:32.127 "state": "configuring", 00:15:32.127 "raid_level": "raid0", 00:15:32.127 "superblock": true, 00:15:32.127 "num_base_bdevs": 2, 00:15:32.127 "num_base_bdevs_discovered": 1, 00:15:32.127 "num_base_bdevs_operational": 2, 00:15:32.127 "base_bdevs_list": [ 00:15:32.127 { 00:15:32.127 "name": "pt1", 00:15:32.127 "uuid": "ff361fa5-8e57-512e-a61e-a287d081cb28", 00:15:32.127 "is_configured": true, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 }, 00:15:32.127 { 00:15:32.127 "name": null, 00:15:32.127 "uuid": 
"233ea899-5f8c-5863-a2c0-551e93ea41ce", 00:15:32.127 "is_configured": false, 00:15:32.127 "data_offset": 2048, 00:15:32.127 "data_size": 63488 00:15:32.127 } 00:15:32.127 ] 00:15:32.127 }' 00:15:32.127 13:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.127 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:15:32.697 13:39:11 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:32.697 13:39:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:32.697 13:39:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:32.697 13:39:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.957 [2024-07-10 13:39:12.088303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.957 [2024-07-10 13:39:12.088401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.957 [2024-07-10 13:39:12.088428] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:32.957 [2024-07-10 13:39:12.088447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.957 [2024-07-10 13:39:12.088875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.957 [2024-07-10 13:39:12.088912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.957 [2024-07-10 13:39:12.089038] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:32.957 [2024-07-10 13:39:12.089078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.957 [2024-07-10 13:39:12.089183] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:32.957 [2024-07-10 13:39:12.089214] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:32.957 [2024-07-10 13:39:12.089342] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:32.957 [2024-07-10 13:39:12.089623] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:32.957 [2024-07-10 13:39:12.089641] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:32.957 [2024-07-10 13:39:12.089775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.957 pt2 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.957 "name": "raid_bdev1", 00:15:32.957 "uuid": "1d700b0f-5693-4dcd-bb9b-1c336eab8f78", 00:15:32.957 "strip_size_kb": 64, 00:15:32.957 "state": "online", 00:15:32.957 "raid_level": "raid0", 00:15:32.957 "superblock": true, 00:15:32.957 "num_base_bdevs": 2, 00:15:32.957 "num_base_bdevs_discovered": 2, 00:15:32.957 "num_base_bdevs_operational": 2, 00:15:32.957 "base_bdevs_list": [ 00:15:32.957 { 00:15:32.957 "name": "pt1", 00:15:32.957 "uuid": "ff361fa5-8e57-512e-a61e-a287d081cb28", 00:15:32.957 "is_configured": true, 00:15:32.957 "data_offset": 2048, 00:15:32.957 "data_size": 63488 00:15:32.957 }, 00:15:32.957 { 00:15:32.957 "name": "pt2", 00:15:32.957 "uuid": "233ea899-5f8c-5863-a2c0-551e93ea41ce", 00:15:32.957 "is_configured": true, 00:15:32.957 "data_offset": 2048, 00:15:32.957 "data_size": 63488 00:15:32.957 } 00:15:32.957 ] 00:15:32.957 }' 00:15:32.957 13:39:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.957 13:39:12 -- common/autotest_common.sh@10 -- # set +x 00:15:33.526 13:39:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:33.527 13:39:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:33.787 [2024-07-10 13:39:13.058821] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.787 13:39:13 -- bdev/bdev_raid.sh@430 -- # '[' 1d700b0f-5693-4dcd-bb9b-1c336eab8f78 '!=' 1d700b0f-5693-4dcd-bb9b-1c336eab8f78 ']' 00:15:33.787 13:39:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:33.787 13:39:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:33.787 13:39:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:33.787 13:39:13 -- bdev/bdev_raid.sh@511 -- # killprocess 115837 00:15:33.787 13:39:13 -- common/autotest_common.sh@926 -- # '[' -z 115837 ']' 00:15:33.787 13:39:13 -- common/autotest_common.sh@930 -- # kill -0 115837 00:15:33.787 13:39:13 -- common/autotest_common.sh@931 -- # uname 00:15:33.787 13:39:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:33.787 13:39:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115837 00:15:33.787 killing process with pid 115837 00:15:33.787 13:39:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:33.787 13:39:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:33.787 13:39:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115837' 00:15:33.787 13:39:13 -- common/autotest_common.sh@945 -- # kill 115837 00:15:33.787 13:39:13 -- common/autotest_common.sh@950 -- # wait 115837 00:15:33.787 [2024-07-10 13:39:13.087829] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.787 [2024-07-10 13:39:13.087890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.787 [2024-07-10 13:39:13.087954] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.787 [2024-07-10 13:39:13.087967] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:34.046 [2024-07-10 13:39:13.277560] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.426 ************************************ 00:15:35.426 END TEST raid_superblock_test 00:15:35.426 
************************************ 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:35.426 00:15:35.426 real 0m7.767s 00:15:35.426 user 0m12.921s 00:15:35.426 sys 0m0.876s 00:15:35.426 13:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.426 13:39:14 -- common/autotest_common.sh@10 -- # set +x 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:35.426 13:39:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:35.426 13:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:35.426 13:39:14 -- common/autotest_common.sh@10 -- # set +x 00:15:35.426 ************************************ 00:15:35.426 START TEST raid_state_function_test 00:15:35.426 ************************************ 00:15:35.426 13:39:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=116102 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116102' 00:15:35.426 Process raid pid: 116102 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:35.426 13:39:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116102 /var/tmp/spdk-raid.sock 00:15:35.426 13:39:14 -- common/autotest_common.sh@819 -- # '[' -z 116102 ']' 00:15:35.426 13:39:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:35.426 13:39:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:35.426 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:15:35.426 13:39:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:35.426 13:39:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:35.426 13:39:14 -- common/autotest_common.sh@10 -- # set +x 00:15:35.426 [2024-07-10 13:39:14.675471] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:35.426 [2024-07-10 13:39:14.676228] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.685 [2024-07-10 13:39:14.817002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.685 [2024-07-10 13:39:15.017148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.945 [2024-07-10 13:39:15.227780] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.205 13:39:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:36.205 13:39:15 -- common/autotest_common.sh@852 -- # return 0 00:15:36.205 13:39:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:36.492 [2024-07-10 13:39:15.678173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.492 [2024-07-10 13:39:15.678237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.492 [2024-07-10 13:39:15.678248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.492 [2024-07-10 13:39:15.678261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.492 13:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.751 13:39:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.751 "name": "Existed_Raid", 00:15:36.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.751 "strip_size_kb": 64, 00:15:36.751 "state": "configuring", 00:15:36.751 "raid_level": "concat", 00:15:36.751 "superblock": false, 00:15:36.751 "num_base_bdevs": 2, 00:15:36.751 "num_base_bdevs_discovered": 0, 00:15:36.751 "num_base_bdevs_operational": 2, 00:15:36.751 "base_bdevs_list": [ 00:15:36.751 { 00:15:36.751 "name": "BaseBdev1", 00:15:36.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.751 
"is_configured": false, 00:15:36.751 "data_offset": 0, 00:15:36.751 "data_size": 0 00:15:36.751 }, 00:15:36.751 { 00:15:36.751 "name": "BaseBdev2", 00:15:36.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.751 "is_configured": false, 00:15:36.751 "data_offset": 0, 00:15:36.751 "data_size": 0 00:15:36.751 } 00:15:36.751 ] 00:15:36.751 }' 00:15:36.751 13:39:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.751 13:39:15 -- common/autotest_common.sh@10 -- # set +x 00:15:37.321 13:39:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.321 [2024-07-10 13:39:16.588442] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.321 [2024-07-10 13:39:16.588477] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:37.321 13:39:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:37.580 [2024-07-10 13:39:16.760175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.580 [2024-07-10 13:39:16.760243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.580 [2024-07-10 13:39:16.760252] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.580 [2024-07-10 13:39:16.760269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.580 13:39:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.840 [2024-07-10 13:39:16.965890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.840 BaseBdev1 00:15:37.840 13:39:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:37.840 13:39:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:37.840 13:39:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:37.840 13:39:16 -- common/autotest_common.sh@889 -- # local i 00:15:37.840 13:39:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:37.840 13:39:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:37.840 13:39:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.840 13:39:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.100 [ 00:15:38.100 { 00:15:38.100 "name": "BaseBdev1", 00:15:38.100 "aliases": [ 00:15:38.100 "d9d38bed-e536-40f0-bd4a-ebe27b8330e6" 00:15:38.100 ], 00:15:38.100 "product_name": "Malloc disk", 00:15:38.100 "block_size": 512, 00:15:38.100 "num_blocks": 65536, 00:15:38.100 "uuid": "d9d38bed-e536-40f0-bd4a-ebe27b8330e6", 00:15:38.100 "assigned_rate_limits": { 00:15:38.100 "rw_ios_per_sec": 0, 00:15:38.100 "rw_mbytes_per_sec": 0, 00:15:38.100 "r_mbytes_per_sec": 0, 00:15:38.100 "w_mbytes_per_sec": 0 00:15:38.100 }, 00:15:38.100 "claimed": true, 00:15:38.100 "claim_type": "exclusive_write", 00:15:38.100 "zoned": false, 00:15:38.100 "supported_io_types": { 00:15:38.100 "read": true, 00:15:38.100 "write": true, 00:15:38.100 "unmap": true, 00:15:38.100 "write_zeroes": true, 00:15:38.100 "flush": true, 00:15:38.100 
"reset": true, 00:15:38.100 "compare": false, 00:15:38.100 "compare_and_write": false, 00:15:38.100 "abort": true, 00:15:38.100 "nvme_admin": false, 00:15:38.100 "nvme_io": false 00:15:38.100 }, 00:15:38.100 "memory_domains": [ 00:15:38.100 { 00:15:38.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.100 "dma_device_type": 2 00:15:38.100 } 00:15:38.100 ], 00:15:38.100 "driver_specific": {} 00:15:38.100 } 00:15:38.100 ] 00:15:38.100 13:39:17 -- common/autotest_common.sh@895 -- # return 0 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.100 13:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.360 13:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.360 "name": "Existed_Raid", 00:15:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.360 "strip_size_kb": 64, 00:15:38.360 "state": "configuring", 00:15:38.360 "raid_level": "concat", 00:15:38.360 "superblock": false, 00:15:38.360 "num_base_bdevs": 2, 00:15:38.360 "num_base_bdevs_discovered": 1, 00:15:38.360 "num_base_bdevs_operational": 2, 00:15:38.360 "base_bdevs_list": [ 00:15:38.360 { 00:15:38.360 "name": "BaseBdev1", 00:15:38.360 "uuid": "d9d38bed-e536-40f0-bd4a-ebe27b8330e6", 00:15:38.360 "is_configured": true, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 65536 00:15:38.360 }, 00:15:38.360 { 00:15:38.360 "name": "BaseBdev2", 00:15:38.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.360 "is_configured": false, 00:15:38.360 "data_offset": 0, 00:15:38.360 "data_size": 0 00:15:38.360 } 00:15:38.360 ] 00:15:38.360 }' 00:15:38.360 13:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.360 13:39:17 -- common/autotest_common.sh@10 -- # set +x 00:15:38.928 13:39:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:38.928 [2024-07-10 13:39:18.207712] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.928 [2024-07-10 13:39:18.207769] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:38.928 13:39:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:38.928 13:39:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:39.187 [2024-07-10 13:39:18.387434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.187 [2024-07-10 13:39:18.389118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:39.187 [2024-07-10 13:39:18.389171] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.187 13:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.447 13:39:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.447 "name": "Existed_Raid", 00:15:39.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.447 "strip_size_kb": 64, 00:15:39.447 "state": "configuring", 00:15:39.447 "raid_level": "concat", 00:15:39.447 "superblock": false, 00:15:39.447 "num_base_bdevs": 2, 00:15:39.447 "num_base_bdevs_discovered": 1, 00:15:39.447 "num_base_bdevs_operational": 2, 00:15:39.447 "base_bdevs_list": [ 00:15:39.447 { 00:15:39.447 "name": "BaseBdev1", 00:15:39.447 "uuid": "d9d38bed-e536-40f0-bd4a-ebe27b8330e6", 00:15:39.447 "is_configured": true, 00:15:39.447 "data_offset": 0, 00:15:39.447 "data_size": 65536 00:15:39.447 }, 00:15:39.447 { 00:15:39.447 "name": "BaseBdev2", 00:15:39.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.447 "is_configured": false, 00:15:39.447 "data_offset": 0, 00:15:39.447 "data_size": 0 00:15:39.447 } 00:15:39.447 ] 00:15:39.447 }' 00:15:39.447 13:39:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.447 13:39:18 -- common/autotest_common.sh@10 -- # set +x 00:15:40.015 13:39:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:40.275 [2024-07-10 13:39:19.398735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.275 [2024-07-10 13:39:19.398787] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:40.275 [2024-07-10 13:39:19.398803] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:40.275 [2024-07-10 13:39:19.398970] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:40.275 [2024-07-10 13:39:19.399243] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:40.275 [2024-07-10 13:39:19.399261] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:40.275 [2024-07-10 13:39:19.399527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.275 BaseBdev2 00:15:40.275 13:39:19 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:40.275 13:39:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:40.275 13:39:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:40.275 13:39:19 -- common/autotest_common.sh@889 -- # local i 00:15:40.275 13:39:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:40.275 13:39:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:40.275 13:39:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.275 13:39:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.534 [ 00:15:40.534 { 00:15:40.534 "name": "BaseBdev2", 00:15:40.534 "aliases": [ 00:15:40.534 "933f681c-cfce-4d94-ae11-f5b46fb8831f" 00:15:40.534 ], 00:15:40.534 "product_name": "Malloc disk", 00:15:40.534 "block_size": 512, 00:15:40.534 "num_blocks": 65536, 00:15:40.534 "uuid": "933f681c-cfce-4d94-ae11-f5b46fb8831f", 00:15:40.534 "assigned_rate_limits": { 00:15:40.534 "rw_ios_per_sec": 0, 00:15:40.534 "rw_mbytes_per_sec": 0, 00:15:40.534 "r_mbytes_per_sec": 0, 00:15:40.534 "w_mbytes_per_sec": 0 00:15:40.534 }, 00:15:40.534 "claimed": true, 00:15:40.534 "claim_type": "exclusive_write", 00:15:40.534 "zoned": false, 00:15:40.534 "supported_io_types": { 00:15:40.534 "read": true, 00:15:40.534 "write": true, 00:15:40.534 "unmap": true, 00:15:40.534 "write_zeroes": true, 00:15:40.534 "flush": true, 00:15:40.534 "reset": true, 00:15:40.534 "compare": false, 00:15:40.534 "compare_and_write": false, 00:15:40.534 "abort": true, 00:15:40.534 "nvme_admin": false, 00:15:40.534 "nvme_io": false 00:15:40.534 }, 00:15:40.534 "memory_domains": [ 00:15:40.534 { 00:15:40.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.534 "dma_device_type": 2 00:15:40.534 } 00:15:40.534 ], 00:15:40.534 "driver_specific": {} 00:15:40.534 } 00:15:40.534 ] 00:15:40.534 13:39:19 -- common/autotest_common.sh@895 -- # return 0 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.534 13:39:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.793 13:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.793 "name": "Existed_Raid", 00:15:40.793 "uuid": "aa800dae-eada-409c-847c-404062582b8b", 00:15:40.793 "strip_size_kb": 64, 00:15:40.793 "state": "online", 00:15:40.793 "raid_level": "concat", 00:15:40.793 
"superblock": false, 00:15:40.793 "num_base_bdevs": 2, 00:15:40.794 "num_base_bdevs_discovered": 2, 00:15:40.794 "num_base_bdevs_operational": 2, 00:15:40.794 "base_bdevs_list": [ 00:15:40.794 { 00:15:40.794 "name": "BaseBdev1", 00:15:40.794 "uuid": "d9d38bed-e536-40f0-bd4a-ebe27b8330e6", 00:15:40.794 "is_configured": true, 00:15:40.794 "data_offset": 0, 00:15:40.794 "data_size": 65536 00:15:40.794 }, 00:15:40.794 { 00:15:40.794 "name": "BaseBdev2", 00:15:40.794 "uuid": "933f681c-cfce-4d94-ae11-f5b46fb8831f", 00:15:40.794 "is_configured": true, 00:15:40.794 "data_offset": 0, 00:15:40.794 "data_size": 65536 00:15:40.794 } 00:15:40.794 ] 00:15:40.794 }' 00:15:40.794 13:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.794 13:39:19 -- common/autotest_common.sh@10 -- # set +x 00:15:41.363 13:39:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.621 [2024-07-10 13:39:20.748454] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.621 [2024-07-10 13:39:20.748490] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.621 [2024-07-10 13:39:20.748560] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.621 13:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.880 13:39:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.880 "name": "Existed_Raid", 00:15:41.880 "uuid": "aa800dae-eada-409c-847c-404062582b8b", 00:15:41.880 "strip_size_kb": 64, 00:15:41.880 "state": "offline", 00:15:41.880 "raid_level": "concat", 00:15:41.880 "superblock": false, 00:15:41.880 "num_base_bdevs": 2, 00:15:41.880 "num_base_bdevs_discovered": 1, 00:15:41.880 "num_base_bdevs_operational": 1, 00:15:41.880 "base_bdevs_list": [ 00:15:41.880 { 00:15:41.880 "name": null, 00:15:41.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.880 "is_configured": false, 00:15:41.880 "data_offset": 0, 00:15:41.880 "data_size": 65536 00:15:41.880 }, 00:15:41.880 { 00:15:41.880 "name": "BaseBdev2", 00:15:41.880 "uuid": "933f681c-cfce-4d94-ae11-f5b46fb8831f", 00:15:41.880 "is_configured": true, 00:15:41.880 "data_offset": 0, 
00:15:41.880 "data_size": 65536 00:15:41.880 } 00:15:41.880 ] 00:15:41.880 }' 00:15:41.880 13:39:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.880 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.449 13:39:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:42.449 13:39:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.449 13:39:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.449 13:39:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:42.708 13:39:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:42.708 13:39:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.708 13:39:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.708 [2024-07-10 13:39:22.013836] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.708 [2024-07-10 13:39:22.013900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:42.968 13:39:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:42.968 13:39:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.968 13:39:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.968 13:39:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.227 13:39:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:43.227 13:39:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:43.227 13:39:22 -- bdev/bdev_raid.sh@287 -- # killprocess 116102 00:15:43.227 13:39:22 -- common/autotest_common.sh@926 -- # '[' -z 116102 ']' 00:15:43.227 13:39:22 -- common/autotest_common.sh@930 -- # kill -0 116102 00:15:43.227 13:39:22 -- common/autotest_common.sh@931 -- # uname 00:15:43.227 13:39:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:43.227 13:39:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116102 00:15:43.227 13:39:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:43.227 13:39:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:43.227 13:39:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116102' 00:15:43.227 killing process with pid 116102 00:15:43.228 13:39:22 -- common/autotest_common.sh@945 -- # kill 116102 00:15:43.228 13:39:22 -- common/autotest_common.sh@950 -- # wait 116102 00:15:43.228 [2024-07-10 13:39:22.357249] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.228 [2024-07-10 13:39:22.357373] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:44.645 00:15:44.645 real 0m9.014s 00:15:44.645 user 0m15.398s 00:15:44.645 sys 0m0.973s 00:15:44.645 13:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.645 13:39:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.645 ************************************ 00:15:44.645 END TEST raid_state_function_test 00:15:44.645 ************************************ 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:44.645 13:39:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:44.645 13:39:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.645 13:39:23 -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.645 ************************************ 00:15:44.645 START TEST raid_state_function_test_sb 00:15:44.645 ************************************ 00:15:44.645 13:39:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=116418 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116418' 00:15:44.645 Process raid pid: 116418 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:44.645 13:39:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116418 /var/tmp/spdk-raid.sock 00:15:44.645 13:39:23 -- common/autotest_common.sh@819 -- # '[' -z 116418 ']' 00:15:44.645 13:39:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:44.645 13:39:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:44.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:44.645 13:39:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:44.645 13:39:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:44.645 13:39:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.645 [2024-07-10 13:39:23.753094] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:44.645 [2024-07-10 13:39:23.753229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.645 [2024-07-10 13:39:23.914431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.904 [2024-07-10 13:39:24.099798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.163 [2024-07-10 13:39:24.304816] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.422 13:39:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:45.422 13:39:24 -- common/autotest_common.sh@852 -- # return 0 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:45.422 [2024-07-10 13:39:24.718646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.422 [2024-07-10 13:39:24.718722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.422 [2024-07-10 13:39:24.718732] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.422 [2024-07-10 13:39:24.718745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.422 13:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.681 13:39:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.681 "name": "Existed_Raid", 00:15:45.681 "uuid": "83f65838-f89a-4740-a8ae-f4f22e6ce1db", 00:15:45.681 "strip_size_kb": 64, 00:15:45.681 "state": "configuring", 00:15:45.681 "raid_level": "concat", 00:15:45.681 "superblock": true, 00:15:45.681 "num_base_bdevs": 2, 00:15:45.681 "num_base_bdevs_discovered": 0, 00:15:45.681 "num_base_bdevs_operational": 2, 00:15:45.681 "base_bdevs_list": [ 00:15:45.681 { 00:15:45.681 "name": "BaseBdev1", 00:15:45.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.681 "is_configured": false, 00:15:45.681 "data_offset": 0, 00:15:45.681 "data_size": 0 00:15:45.681 }, 00:15:45.681 { 00:15:45.681 "name": "BaseBdev2", 00:15:45.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.681 "is_configured": false, 00:15:45.681 "data_offset": 0, 00:15:45.681 "data_size": 0 00:15:45.681 } 00:15:45.681 ] 00:15:45.681 }' 00:15:45.681 13:39:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.681 13:39:24 -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.248 13:39:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.507 [2024-07-10 13:39:25.712776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.507 [2024-07-10 13:39:25.712822] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:46.507 13:39:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:46.765 [2024-07-10 13:39:25.876502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.765 [2024-07-10 13:39:25.876570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.765 [2024-07-10 13:39:25.876578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.765 [2024-07-10 13:39:25.876596] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.765 13:39:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.765 [2024-07-10 13:39:26.096754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.765 BaseBdev1 00:15:46.765 13:39:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:46.765 13:39:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:46.765 13:39:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:46.765 13:39:26 -- common/autotest_common.sh@889 -- # local i 00:15:46.765 13:39:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:46.765 13:39:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:46.765 13:39:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.024 13:39:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.282 [ 00:15:47.282 { 00:15:47.282 "name": "BaseBdev1", 00:15:47.282 "aliases": [ 00:15:47.282 "a57f56d9-3de4-4f11-ad0f-8624c4a92d7e" 00:15:47.282 ], 00:15:47.282 "product_name": "Malloc disk", 00:15:47.282 "block_size": 512, 00:15:47.282 "num_blocks": 65536, 00:15:47.282 "uuid": "a57f56d9-3de4-4f11-ad0f-8624c4a92d7e", 00:15:47.282 "assigned_rate_limits": { 00:15:47.282 "rw_ios_per_sec": 0, 00:15:47.282 "rw_mbytes_per_sec": 0, 00:15:47.282 "r_mbytes_per_sec": 0, 00:15:47.282 "w_mbytes_per_sec": 0 00:15:47.282 }, 00:15:47.282 "claimed": true, 00:15:47.282 "claim_type": "exclusive_write", 00:15:47.282 "zoned": false, 00:15:47.282 "supported_io_types": { 00:15:47.282 "read": true, 00:15:47.282 "write": true, 00:15:47.282 "unmap": true, 00:15:47.282 "write_zeroes": true, 00:15:47.282 "flush": true, 00:15:47.282 "reset": true, 00:15:47.282 "compare": false, 00:15:47.282 "compare_and_write": false, 00:15:47.282 "abort": true, 00:15:47.282 "nvme_admin": false, 00:15:47.282 "nvme_io": false 00:15:47.282 }, 00:15:47.282 "memory_domains": [ 00:15:47.282 { 00:15:47.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.282 "dma_device_type": 2 00:15:47.283 } 00:15:47.283 ], 00:15:47.283 "driver_specific": {} 00:15:47.283 } 00:15:47.283 ] 00:15:47.283 
13:39:26 -- common/autotest_common.sh@895 -- # return 0 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.283 13:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.541 13:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.541 "name": "Existed_Raid", 00:15:47.541 "uuid": "1e76da34-3c8e-4dcd-9cfa-261f72f140e8", 00:15:47.541 "strip_size_kb": 64, 00:15:47.541 "state": "configuring", 00:15:47.541 "raid_level": "concat", 00:15:47.541 "superblock": true, 00:15:47.541 "num_base_bdevs": 2, 00:15:47.541 "num_base_bdevs_discovered": 1, 00:15:47.541 "num_base_bdevs_operational": 2, 00:15:47.541 "base_bdevs_list": [ 00:15:47.541 { 00:15:47.541 "name": "BaseBdev1", 00:15:47.541 "uuid": "a57f56d9-3de4-4f11-ad0f-8624c4a92d7e", 00:15:47.541 "is_configured": true, 00:15:47.541 "data_offset": 2048, 00:15:47.541 "data_size": 63488 00:15:47.541 }, 00:15:47.541 { 00:15:47.541 "name": "BaseBdev2", 00:15:47.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.541 "is_configured": false, 00:15:47.541 "data_offset": 0, 00:15:47.541 "data_size": 0 00:15:47.541 } 00:15:47.541 ] 00:15:47.541 }' 00:15:47.541 13:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.541 13:39:26 -- common/autotest_common.sh@10 -- # set +x 00:15:48.108 13:39:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.108 [2024-07-10 13:39:27.458503] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.108 [2024-07-10 13:39:27.458556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:48.367 13:39:27 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:48.367 13:39:27 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:48.626 13:39:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.626 BaseBdev1 00:15:48.626 13:39:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:48.626 13:39:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:48.626 13:39:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.626 13:39:27 -- common/autotest_common.sh@889 -- # local i 00:15:48.626 13:39:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.626 13:39:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.626 13:39:27 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.885 13:39:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.143 [ 00:15:49.143 { 00:15:49.143 "name": "BaseBdev1", 00:15:49.143 "aliases": [ 00:15:49.143 "5084fa47-6e4f-4f15-8250-0ebd1f60e8ab" 00:15:49.143 ], 00:15:49.143 "product_name": "Malloc disk", 00:15:49.143 "block_size": 512, 00:15:49.144 "num_blocks": 65536, 00:15:49.144 "uuid": "5084fa47-6e4f-4f15-8250-0ebd1f60e8ab", 00:15:49.144 "assigned_rate_limits": { 00:15:49.144 "rw_ios_per_sec": 0, 00:15:49.144 "rw_mbytes_per_sec": 0, 00:15:49.144 "r_mbytes_per_sec": 0, 00:15:49.144 "w_mbytes_per_sec": 0 00:15:49.144 }, 00:15:49.144 "claimed": false, 00:15:49.144 "zoned": false, 00:15:49.144 "supported_io_types": { 00:15:49.144 "read": true, 00:15:49.144 "write": true, 00:15:49.144 "unmap": true, 00:15:49.144 "write_zeroes": true, 00:15:49.144 "flush": true, 00:15:49.144 "reset": true, 00:15:49.144 "compare": false, 00:15:49.144 "compare_and_write": false, 00:15:49.144 "abort": true, 00:15:49.144 "nvme_admin": false, 00:15:49.144 "nvme_io": false 00:15:49.144 }, 00:15:49.144 "memory_domains": [ 00:15:49.144 { 00:15:49.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.144 "dma_device_type": 2 00:15:49.144 } 00:15:49.144 ], 00:15:49.144 "driver_specific": {} 00:15:49.144 } 00:15:49.144 ] 00:15:49.144 13:39:28 -- common/autotest_common.sh@895 -- # return 0 00:15:49.144 13:39:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:49.144 [2024-07-10 13:39:28.495991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.144 [2024-07-10 13:39:28.497640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.144 [2024-07-10 13:39:28.497734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.403 "name": "Existed_Raid", 00:15:49.403 "uuid": "f0d3248e-4fd3-4265-b5e9-98cdbf631b67", 00:15:49.403 "strip_size_kb": 64, 00:15:49.403 "state": 
"configuring", 00:15:49.403 "raid_level": "concat", 00:15:49.403 "superblock": true, 00:15:49.403 "num_base_bdevs": 2, 00:15:49.403 "num_base_bdevs_discovered": 1, 00:15:49.403 "num_base_bdevs_operational": 2, 00:15:49.403 "base_bdevs_list": [ 00:15:49.403 { 00:15:49.403 "name": "BaseBdev1", 00:15:49.403 "uuid": "5084fa47-6e4f-4f15-8250-0ebd1f60e8ab", 00:15:49.403 "is_configured": true, 00:15:49.403 "data_offset": 2048, 00:15:49.403 "data_size": 63488 00:15:49.403 }, 00:15:49.403 { 00:15:49.403 "name": "BaseBdev2", 00:15:49.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.403 "is_configured": false, 00:15:49.403 "data_offset": 0, 00:15:49.403 "data_size": 0 00:15:49.403 } 00:15:49.403 ] 00:15:49.403 }' 00:15:49.403 13:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.403 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:15:50.031 13:39:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.290 [2024-07-10 13:39:29.526206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.290 [2024-07-10 13:39:29.526393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:50.290 [2024-07-10 13:39:29.526404] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.290 [2024-07-10 13:39:29.526523] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:50.290 BaseBdev2 00:15:50.290 [2024-07-10 13:39:29.526820] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:50.290 [2024-07-10 13:39:29.526843] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:50.290 [2024-07-10 13:39:29.526968] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.290 13:39:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:50.290 13:39:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:50.290 13:39:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:50.290 13:39:29 -- common/autotest_common.sh@889 -- # local i 00:15:50.290 13:39:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:50.290 13:39:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:50.290 13:39:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.550 13:39:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.550 [ 00:15:50.550 { 00:15:50.550 "name": "BaseBdev2", 00:15:50.550 "aliases": [ 00:15:50.550 "01eb19dd-f36a-4066-a215-3116ef489270" 00:15:50.550 ], 00:15:50.550 "product_name": "Malloc disk", 00:15:50.550 "block_size": 512, 00:15:50.550 "num_blocks": 65536, 00:15:50.550 "uuid": "01eb19dd-f36a-4066-a215-3116ef489270", 00:15:50.550 "assigned_rate_limits": { 00:15:50.550 "rw_ios_per_sec": 0, 00:15:50.550 "rw_mbytes_per_sec": 0, 00:15:50.550 "r_mbytes_per_sec": 0, 00:15:50.550 "w_mbytes_per_sec": 0 00:15:50.550 }, 00:15:50.550 "claimed": true, 00:15:50.550 "claim_type": "exclusive_write", 00:15:50.550 "zoned": false, 00:15:50.550 "supported_io_types": { 00:15:50.550 "read": true, 00:15:50.550 "write": true, 00:15:50.550 "unmap": true, 00:15:50.550 "write_zeroes": true, 00:15:50.550 "flush": true, 00:15:50.550 
"reset": true, 00:15:50.550 "compare": false, 00:15:50.550 "compare_and_write": false, 00:15:50.550 "abort": true, 00:15:50.550 "nvme_admin": false, 00:15:50.550 "nvme_io": false 00:15:50.550 }, 00:15:50.550 "memory_domains": [ 00:15:50.550 { 00:15:50.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.550 "dma_device_type": 2 00:15:50.550 } 00:15:50.550 ], 00:15:50.550 "driver_specific": {} 00:15:50.550 } 00:15:50.550 ] 00:15:50.550 13:39:29 -- common/autotest_common.sh@895 -- # return 0 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.550 13:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.809 13:39:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.809 "name": "Existed_Raid", 00:15:50.809 "uuid": "f0d3248e-4fd3-4265-b5e9-98cdbf631b67", 00:15:50.809 "strip_size_kb": 64, 00:15:50.809 "state": "online", 00:15:50.809 "raid_level": "concat", 00:15:50.809 "superblock": true, 00:15:50.809 "num_base_bdevs": 2, 00:15:50.809 "num_base_bdevs_discovered": 2, 00:15:50.809 "num_base_bdevs_operational": 2, 00:15:50.810 "base_bdevs_list": [ 00:15:50.810 { 00:15:50.810 "name": "BaseBdev1", 00:15:50.810 "uuid": "5084fa47-6e4f-4f15-8250-0ebd1f60e8ab", 00:15:50.810 "is_configured": true, 00:15:50.810 "data_offset": 2048, 00:15:50.810 "data_size": 63488 00:15:50.810 }, 00:15:50.810 { 00:15:50.810 "name": "BaseBdev2", 00:15:50.810 "uuid": "01eb19dd-f36a-4066-a215-3116ef489270", 00:15:50.810 "is_configured": true, 00:15:50.810 "data_offset": 2048, 00:15:50.810 "data_size": 63488 00:15:50.810 } 00:15:50.810 ] 00:15:50.810 }' 00:15:50.810 13:39:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.810 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:15:51.379 13:39:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:51.638 [2024-07-10 13:39:30.883869] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.638 [2024-07-10 13:39:30.883906] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.638 [2024-07-10 13:39:30.883964] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:51.898 
13:39:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.898 "name": "Existed_Raid", 00:15:51.898 "uuid": "f0d3248e-4fd3-4265-b5e9-98cdbf631b67", 00:15:51.898 "strip_size_kb": 64, 00:15:51.898 "state": "offline", 00:15:51.898 "raid_level": "concat", 00:15:51.898 "superblock": true, 00:15:51.898 "num_base_bdevs": 2, 00:15:51.898 "num_base_bdevs_discovered": 1, 00:15:51.898 "num_base_bdevs_operational": 1, 00:15:51.898 "base_bdevs_list": [ 00:15:51.898 { 00:15:51.898 "name": null, 00:15:51.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.898 "is_configured": false, 00:15:51.898 "data_offset": 2048, 00:15:51.898 "data_size": 63488 00:15:51.898 }, 00:15:51.898 { 00:15:51.898 "name": "BaseBdev2", 00:15:51.898 "uuid": "01eb19dd-f36a-4066-a215-3116ef489270", 00:15:51.898 "is_configured": true, 00:15:51.898 "data_offset": 2048, 00:15:51.898 "data_size": 63488 00:15:51.898 } 00:15:51.898 ] 00:15:51.898 }' 00:15:51.898 13:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.898 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:15:52.467 13:39:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:52.467 13:39:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:52.467 13:39:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.467 13:39:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:52.726 13:39:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:52.726 13:39:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.726 13:39:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:52.986 [2024-07-10 13:39:32.174150] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.986 [2024-07-10 13:39:32.174217] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:52.986 13:39:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:52.986 13:39:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:52.986 13:39:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.986 13:39:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.246 13:39:32 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:53.246 13:39:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:53.246 13:39:32 -- bdev/bdev_raid.sh@287 -- # killprocess 116418 00:15:53.246 13:39:32 -- common/autotest_common.sh@926 -- # '[' -z 116418 ']' 00:15:53.246 13:39:32 -- common/autotest_common.sh@930 -- # kill -0 116418 00:15:53.246 13:39:32 -- common/autotest_common.sh@931 -- # uname 00:15:53.246 13:39:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.246 13:39:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116418 00:15:53.246 killing process with pid 116418 00:15:53.246 13:39:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.246 13:39:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.246 13:39:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116418' 00:15:53.246 13:39:32 -- common/autotest_common.sh@945 -- # kill 116418 00:15:53.246 13:39:32 -- common/autotest_common.sh@950 -- # wait 116418 00:15:53.246 [2024-07-10 13:39:32.474242] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.246 [2024-07-10 13:39:32.474359] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.623 ************************************ 00:15:54.623 END TEST raid_state_function_test_sb 00:15:54.623 ************************************ 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:54.623 00:15:54.623 real 0m10.050s 00:15:54.623 user 0m17.065s 00:15:54.623 sys 0m1.242s 00:15:54.623 13:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.623 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:54.623 13:39:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:54.623 13:39:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:54.623 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.623 ************************************ 00:15:54.623 START TEST raid_superblock_test 00:15:54.623 ************************************ 00:15:54.623 13:39:33 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=116759 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116759 
/var/tmp/spdk-raid.sock 00:15:54.623 13:39:33 -- common/autotest_common.sh@819 -- # '[' -z 116759 ']' 00:15:54.623 13:39:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.623 13:39:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:54.623 13:39:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.623 13:39:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:54.623 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.623 13:39:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:54.623 [2024-07-10 13:39:33.855458] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:54.623 [2024-07-10 13:39:33.855591] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116759 ] 00:15:54.882 [2024-07-10 13:39:34.013053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.882 [2024-07-10 13:39:34.201458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.142 [2024-07-10 13:39:34.393388] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.401 13:39:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.401 13:39:34 -- common/autotest_common.sh@852 -- # return 0 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.401 13:39:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:55.661 malloc1 00:15:55.661 13:39:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.950 [2024-07-10 13:39:35.050979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.950 [2024-07-10 13:39:35.051058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.950 [2024-07-10 13:39:35.051119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:55.950 [2024-07-10 13:39:35.051155] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.950 [2024-07-10 13:39:35.053161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.950 [2024-07-10 13:39:35.053208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.950 pt1 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
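The loop tracing through here builds one malloc/passthru pair per base bdev (pt2 follows below). Condensed into a self-contained sketch, with sizes, names, and UUIDs exactly as in the log; the loop itself is a simplification, not the bdev_raid.sh source:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2; do
      # 32 MiB malloc bdev: 65536 blocks of 512 bytes.
      $RPC bdev_malloc_create 32 512 -b malloc$i
      # Wrap it in a passthru bdev with a fixed UUID so each base bdev
      # keeps a stable identity across the superblock checks below.
      $RPC bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done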
00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.950 13:39:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:55.950 malloc2 00:15:56.210 13:39:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.210 [2024-07-10 13:39:35.486879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.210 [2024-07-10 13:39:35.486968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.210 [2024-07-10 13:39:35.487004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:56.210 [2024-07-10 13:39:35.487047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.210 [2024-07-10 13:39:35.489036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.210 [2024-07-10 13:39:35.489085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.210 pt2 00:15:56.210 13:39:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:56.210 13:39:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:56.210 13:39:35 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:56.469 [2024-07-10 13:39:35.682603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.469 [2024-07-10 13:39:35.684391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.469 [2024-07-10 13:39:35.684565] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:56.469 [2024-07-10 13:39:35.684577] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:56.469 [2024-07-10 13:39:35.684729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:56.469 [2024-07-10 13:39:35.685060] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:56.469 [2024-07-10 13:39:35.685079] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:56.469 [2024-07-10 13:39:35.685231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
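bdev_raid_create above assembles the superblock variant of the array directly from the two passthru bdevs, and verify_raid_bdev_state (whose local setup continues below) boils down to one RPC plus a jq filter over its JSON. A compressed restatement of those two steps, using the same flags and filter as traced; this is a sketch, not the function body itself:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # -z 64: 64 KiB strip; -r concat: raid level; -s: write a superblock
  # onto each base bdev so the array can be re-examined later.
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  # Fetch every raid bdev and check the state of the one just created;
  # the test expects "online" with both base bdevs discovered.
  $RPC bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'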
00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.470 13:39:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.729 13:39:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.729 "name": "raid_bdev1", 00:15:56.729 "uuid": "5db4820b-b312-4a9f-9f2a-dcf1fd61ec54", 00:15:56.729 "strip_size_kb": 64, 00:15:56.729 "state": "online", 00:15:56.729 "raid_level": "concat", 00:15:56.729 "superblock": true, 00:15:56.729 "num_base_bdevs": 2, 00:15:56.729 "num_base_bdevs_discovered": 2, 00:15:56.729 "num_base_bdevs_operational": 2, 00:15:56.729 "base_bdevs_list": [ 00:15:56.729 { 00:15:56.729 "name": "pt1", 00:15:56.729 "uuid": "3ab65415-97d4-5f8a-b1f6-4f0963ae03f6", 00:15:56.729 "is_configured": true, 00:15:56.729 "data_offset": 2048, 00:15:56.729 "data_size": 63488 00:15:56.729 }, 00:15:56.729 { 00:15:56.729 "name": "pt2", 00:15:56.729 "uuid": "f1bbc0fa-fd35-58af-9c39-001bfa5138d1", 00:15:56.729 "is_configured": true, 00:15:56.729 "data_offset": 2048, 00:15:56.729 "data_size": 63488 00:15:56.729 } 00:15:56.729 ] 00:15:56.729 }' 00:15:56.729 13:39:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.729 13:39:35 -- common/autotest_common.sh@10 -- # set +x 00:15:57.297 13:39:36 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.297 13:39:36 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:57.297 [2024-07-10 13:39:36.628999] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.297 13:39:36 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5db4820b-b312-4a9f-9f2a-dcf1fd61ec54 00:15:57.297 13:39:36 -- bdev/bdev_raid.sh@380 -- # '[' -z 5db4820b-b312-4a9f-9f2a-dcf1fd61ec54 ']' 00:15:57.297 13:39:36 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:57.556 [2024-07-10 13:39:36.820460] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.556 [2024-07-10 13:39:36.820554] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.556 [2024-07-10 13:39:36.820657] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.556 [2024-07-10 13:39:36.820721] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.556 [2024-07-10 13:39:36.820738] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:57.556 13:39:36 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.556 13:39:36 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:57.815 13:39:37 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:57.815 13:39:37 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:57.815 13:39:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.815 13:39:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
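Teardown runs in reverse of setup: delete the array, then each passthru on top of its malloc (pt2's removal follows below). The same flow as one short sketch, with RPC names taken from the trace and the loop simplified:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Deleting the raid releases its exclusive claims on pt1 and pt2 ...
  $RPC bdev_raid_delete raid_bdev1
  # ... so each passthru can now be removed. The malloc bdevs underneath
  # still carry the raid superblock, which the "File exists" negative
  # test further down depends on.
  for i in 1 2; do
      $RPC bdev_passthru_delete pt$i
  done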
00:15:58.074 13:39:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.074 13:39:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:58.074 13:39:37 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:58.074 13:39:37 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:58.334 13:39:37 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:58.334 13:39:37 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:58.334 13:39:37 -- common/autotest_common.sh@640 -- # local es=0 00:15:58.334 13:39:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:58.334 13:39:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.334 13:39:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:58.334 13:39:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.334 13:39:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:58.334 13:39:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.334 13:39:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:58.334 13:39:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.334 13:39:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:58.334 13:39:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:58.594 [2024-07-10 13:39:37.770769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:58.594 [2024-07-10 13:39:37.772548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:58.594 [2024-07-10 13:39:37.772673] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:58.594 [2024-07-10 13:39:37.772759] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:58.594 [2024-07-10 13:39:37.772799] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.594 [2024-07-10 13:39:37.772819] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:58.594 request: 00:15:58.594 { 00:15:58.594 "name": "raid_bdev1", 00:15:58.594 "raid_level": "concat", 00:15:58.594 "base_bdevs": [ 00:15:58.594 "malloc1", 00:15:58.594 "malloc2" 00:15:58.594 ], 00:15:58.594 "superblock": false, 00:15:58.594 "strip_size_kb": 64, 00:15:58.594 "method": "bdev_raid_create", 00:15:58.594 "req_id": 1 00:15:58.594 } 00:15:58.594 Got JSON-RPC error response 00:15:58.594 response: 00:15:58.594 { 00:15:58.594 "code": -17, 00:15:58.594 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:58.594 } 00:15:58.594 13:39:37 -- common/autotest_common.sh@643 -- # es=1 00:15:58.594 13:39:37 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:58.594 13:39:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:58.594 13:39:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:58.594 13:39:37 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.594 13:39:37 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:58.854 13:39:37 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:58.854 13:39:37 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:58.854 13:39:37 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.854 [2024-07-10 13:39:38.134054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.854 [2024-07-10 13:39:38.134193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.854 [2024-07-10 13:39:38.134237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:58.854 [2024-07-10 13:39:38.134303] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.854 [2024-07-10 13:39:38.136311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.854 [2024-07-10 13:39:38.136392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.854 [2024-07-10 13:39:38.136515] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:58.854 [2024-07-10 13:39:38.136596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.854 pt1 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.854 13:39:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.114 13:39:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.114 "name": "raid_bdev1", 00:15:59.114 "uuid": "5db4820b-b312-4a9f-9f2a-dcf1fd61ec54", 00:15:59.114 "strip_size_kb": 64, 00:15:59.114 "state": "configuring", 00:15:59.114 "raid_level": "concat", 00:15:59.114 "superblock": true, 00:15:59.114 "num_base_bdevs": 2, 00:15:59.114 "num_base_bdevs_discovered": 1, 00:15:59.114 "num_base_bdevs_operational": 2, 00:15:59.114 "base_bdevs_list": [ 00:15:59.114 { 00:15:59.114 "name": "pt1", 00:15:59.114 "uuid": "3ab65415-97d4-5f8a-b1f6-4f0963ae03f6", 00:15:59.115 "is_configured": true, 00:15:59.115 "data_offset": 2048, 00:15:59.115 "data_size": 63488 00:15:59.115 }, 00:15:59.115 { 00:15:59.115 "name": null, 00:15:59.115 "uuid": 
"f1bbc0fa-fd35-58af-9c39-001bfa5138d1", 00:15:59.115 "is_configured": false, 00:15:59.115 "data_offset": 2048, 00:15:59.115 "data_size": 63488 00:15:59.115 } 00:15:59.115 ] 00:15:59.115 }' 00:15:59.115 13:39:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.115 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:15:59.683 13:39:38 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:59.683 13:39:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:59.683 13:39:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:59.683 13:39:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:59.942 [2024-07-10 13:39:39.212199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.942 [2024-07-10 13:39:39.212343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.942 [2024-07-10 13:39:39.212389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.942 [2024-07-10 13:39:39.212432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.942 [2024-07-10 13:39:39.212874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.942 [2024-07-10 13:39:39.212935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.942 [2024-07-10 13:39:39.213067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:59.942 [2024-07-10 13:39:39.213127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.942 [2024-07-10 13:39:39.213271] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:59.942 [2024-07-10 13:39:39.213308] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.942 [2024-07-10 13:39:39.213450] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:59.942 [2024-07-10 13:39:39.213730] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:59.942 [2024-07-10 13:39:39.213769] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:59.942 [2024-07-10 13:39:39.213913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.942 pt2 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.942 13:39:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.201 13:39:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.201 "name": "raid_bdev1", 00:16:00.201 "uuid": "5db4820b-b312-4a9f-9f2a-dcf1fd61ec54", 00:16:00.201 "strip_size_kb": 64, 00:16:00.201 "state": "online", 00:16:00.201 "raid_level": "concat", 00:16:00.201 "superblock": true, 00:16:00.201 "num_base_bdevs": 2, 00:16:00.201 "num_base_bdevs_discovered": 2, 00:16:00.201 "num_base_bdevs_operational": 2, 00:16:00.201 "base_bdevs_list": [ 00:16:00.201 { 00:16:00.201 "name": "pt1", 00:16:00.201 "uuid": "3ab65415-97d4-5f8a-b1f6-4f0963ae03f6", 00:16:00.201 "is_configured": true, 00:16:00.201 "data_offset": 2048, 00:16:00.201 "data_size": 63488 00:16:00.201 }, 00:16:00.201 { 00:16:00.201 "name": "pt2", 00:16:00.201 "uuid": "f1bbc0fa-fd35-58af-9c39-001bfa5138d1", 00:16:00.201 "is_configured": true, 00:16:00.201 "data_offset": 2048, 00:16:00.201 "data_size": 63488 00:16:00.201 } 00:16:00.201 ] 00:16:00.201 }' 00:16:00.201 13:39:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.201 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:16:00.769 13:39:40 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:00.769 13:39:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:01.029 [2024-07-10 13:39:40.178701] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.029 13:39:40 -- bdev/bdev_raid.sh@430 -- # '[' 5db4820b-b312-4a9f-9f2a-dcf1fd61ec54 '!=' 5db4820b-b312-4a9f-9f2a-dcf1fd61ec54 ']' 00:16:01.029 13:39:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:01.029 13:39:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:01.029 13:39:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:01.029 13:39:40 -- bdev/bdev_raid.sh@511 -- # killprocess 116759 00:16:01.029 13:39:40 -- common/autotest_common.sh@926 -- # '[' -z 116759 ']' 00:16:01.029 13:39:40 -- common/autotest_common.sh@930 -- # kill -0 116759 00:16:01.029 13:39:40 -- common/autotest_common.sh@931 -- # uname 00:16:01.029 13:39:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:01.029 13:39:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116759 00:16:01.029 13:39:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:01.029 13:39:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:01.029 13:39:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116759' 00:16:01.029 killing process with pid 116759 00:16:01.029 13:39:40 -- common/autotest_common.sh@945 -- # kill 116759 00:16:01.029 [2024-07-10 13:39:40.221898] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.029 13:39:40 -- common/autotest_common.sh@950 -- # wait 116759 00:16:01.029 [2024-07-10 13:39:40.222000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.029 [2024-07-10 13:39:40.222078] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.029 [2024-07-10 13:39:40.222103] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:01.288 [2024-07-10 13:39:40.414758] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:02.736 00:16:02.736 real 0m7.879s 
00:16:02.736 user 0m13.097s 00:16:02.736 sys 0m0.969s 00:16:02.736 13:39:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.736 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:02.736 ************************************ 00:16:02.736 END TEST raid_superblock_test 00:16:02.736 ************************************ 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:02.736 13:39:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:02.736 13:39:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.736 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:02.736 ************************************ 00:16:02.736 START TEST raid_state_function_test 00:16:02.736 ************************************ 00:16:02.736 13:39:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:02.736 13:39:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=117002 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:02.737 Process raid pid: 117002 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117002' 00:16:02.737 13:39:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117002 /var/tmp/spdk-raid.sock 00:16:02.737 13:39:41 -- common/autotest_common.sh@819 -- # '[' -z 117002 ']' 00:16:02.737 13:39:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:02.737 13:39:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.737 13:39:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:02.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:02.737 13:39:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.737 13:39:41 -- common/autotest_common.sh@10 -- # set +x 00:16:02.737 [2024-07-10 13:39:41.809399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:02.737 [2024-07-10 13:39:41.810018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.737 [2024-07-10 13:39:41.969252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.996 [2024-07-10 13:39:42.177608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.255 [2024-07-10 13:39:42.386682] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.514 13:39:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.515 13:39:42 -- common/autotest_common.sh@852 -- # return 0 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:03.515 [2024-07-10 13:39:42.773806] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.515 [2024-07-10 13:39:42.773915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.515 [2024-07-10 13:39:42.773943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.515 [2024-07-10 13:39:42.773968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.515 13:39:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.784 13:39:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.784 "name": "Existed_Raid", 00:16:03.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.784 "strip_size_kb": 0, 00:16:03.784 "state": "configuring", 00:16:03.784 "raid_level": "raid1", 00:16:03.784 "superblock": false, 00:16:03.784 "num_base_bdevs": 2, 00:16:03.784 "num_base_bdevs_discovered": 0, 00:16:03.784 "num_base_bdevs_operational": 2, 00:16:03.784 "base_bdevs_list": [ 00:16:03.784 { 00:16:03.784 "name": "BaseBdev1", 00:16:03.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.784 "is_configured": false, 00:16:03.784 "data_offset": 0, 00:16:03.784 "data_size": 0 
00:16:03.784 }, 00:16:03.784 { 00:16:03.784 "name": "BaseBdev2", 00:16:03.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.784 "is_configured": false, 00:16:03.784 "data_offset": 0, 00:16:03.784 "data_size": 0 00:16:03.784 } 00:16:03.784 ] 00:16:03.784 }' 00:16:03.784 13:39:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.784 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:16:04.352 13:39:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:04.611 [2024-07-10 13:39:43.827877] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.611 [2024-07-10 13:39:43.827965] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:04.611 13:39:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:04.870 [2024-07-10 13:39:44.007579] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.870 [2024-07-10 13:39:44.007722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.870 [2024-07-10 13:39:44.007751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.870 [2024-07-10 13:39:44.007779] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.870 13:39:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.870 [2024-07-10 13:39:44.224417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.870 BaseBdev1 00:16:05.130 13:39:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:05.130 13:39:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:05.130 13:39:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:05.130 13:39:44 -- common/autotest_common.sh@889 -- # local i 00:16:05.130 13:39:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:05.130 13:39:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:05.130 13:39:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.130 13:39:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.388 [ 00:16:05.388 { 00:16:05.388 "name": "BaseBdev1", 00:16:05.388 "aliases": [ 00:16:05.388 "dd376597-d781-4721-bf1e-954ad7da41e1" 00:16:05.388 ], 00:16:05.388 "product_name": "Malloc disk", 00:16:05.388 "block_size": 512, 00:16:05.388 "num_blocks": 65536, 00:16:05.388 "uuid": "dd376597-d781-4721-bf1e-954ad7da41e1", 00:16:05.388 "assigned_rate_limits": { 00:16:05.388 "rw_ios_per_sec": 0, 00:16:05.388 "rw_mbytes_per_sec": 0, 00:16:05.388 "r_mbytes_per_sec": 0, 00:16:05.388 "w_mbytes_per_sec": 0 00:16:05.388 }, 00:16:05.388 "claimed": true, 00:16:05.388 "claim_type": "exclusive_write", 00:16:05.388 "zoned": false, 00:16:05.388 "supported_io_types": { 00:16:05.388 "read": true, 00:16:05.388 "write": true, 00:16:05.388 "unmap": true, 00:16:05.388 "write_zeroes": true, 00:16:05.388 "flush": true, 00:16:05.388 "reset": true, 00:16:05.388 "compare": false, 00:16:05.388 "compare_and_write": false, 
00:16:05.388 "abort": true, 00:16:05.388 "nvme_admin": false, 00:16:05.388 "nvme_io": false 00:16:05.388 }, 00:16:05.388 "memory_domains": [ 00:16:05.388 { 00:16:05.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.388 "dma_device_type": 2 00:16:05.388 } 00:16:05.388 ], 00:16:05.388 "driver_specific": {} 00:16:05.388 } 00:16:05.388 ] 00:16:05.388 13:39:44 -- common/autotest_common.sh@895 -- # return 0 00:16:05.388 13:39:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:05.388 13:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.388 13:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.388 13:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.389 13:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.646 13:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.646 "name": "Existed_Raid", 00:16:05.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.646 "strip_size_kb": 0, 00:16:05.646 "state": "configuring", 00:16:05.646 "raid_level": "raid1", 00:16:05.646 "superblock": false, 00:16:05.646 "num_base_bdevs": 2, 00:16:05.646 "num_base_bdevs_discovered": 1, 00:16:05.646 "num_base_bdevs_operational": 2, 00:16:05.646 "base_bdevs_list": [ 00:16:05.646 { 00:16:05.646 "name": "BaseBdev1", 00:16:05.646 "uuid": "dd376597-d781-4721-bf1e-954ad7da41e1", 00:16:05.646 "is_configured": true, 00:16:05.646 "data_offset": 0, 00:16:05.646 "data_size": 65536 00:16:05.646 }, 00:16:05.646 { 00:16:05.646 "name": "BaseBdev2", 00:16:05.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.646 "is_configured": false, 00:16:05.646 "data_offset": 0, 00:16:05.646 "data_size": 0 00:16:05.646 } 00:16:05.646 ] 00:16:05.646 }' 00:16:05.646 13:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.646 13:39:44 -- common/autotest_common.sh@10 -- # set +x 00:16:06.211 13:39:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:06.469 [2024-07-10 13:39:45.606156] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.469 [2024-07-10 13:39:45.606299] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:06.469 [2024-07-10 13:39:45.805880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.469 [2024-07-10 13:39:45.807804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.469 [2024-07-10 13:39:45.807915] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.469 13:39:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.728 13:39:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.728 "name": "Existed_Raid", 00:16:06.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.728 "strip_size_kb": 0, 00:16:06.728 "state": "configuring", 00:16:06.728 "raid_level": "raid1", 00:16:06.728 "superblock": false, 00:16:06.728 "num_base_bdevs": 2, 00:16:06.728 "num_base_bdevs_discovered": 1, 00:16:06.728 "num_base_bdevs_operational": 2, 00:16:06.728 "base_bdevs_list": [ 00:16:06.728 { 00:16:06.728 "name": "BaseBdev1", 00:16:06.728 "uuid": "dd376597-d781-4721-bf1e-954ad7da41e1", 00:16:06.728 "is_configured": true, 00:16:06.728 "data_offset": 0, 00:16:06.728 "data_size": 65536 00:16:06.728 }, 00:16:06.728 { 00:16:06.728 "name": "BaseBdev2", 00:16:06.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.728 "is_configured": false, 00:16:06.728 "data_offset": 0, 00:16:06.728 "data_size": 0 00:16:06.728 } 00:16:06.728 ] 00:16:06.728 }' 00:16:06.728 13:39:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.728 13:39:46 -- common/autotest_common.sh@10 -- # set +x 00:16:07.295 13:39:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:07.554 [2024-07-10 13:39:46.837251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.554 [2024-07-10 13:39:46.837381] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:07.554 [2024-07-10 13:39:46.837403] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:07.554 [2024-07-10 13:39:46.837539] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:07.554 [2024-07-10 13:39:46.837837] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:07.554 [2024-07-10 13:39:46.837877] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:07.554 [2024-07-10 13:39:46.838142] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.554 BaseBdev2 00:16:07.554 13:39:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:07.554 13:39:46 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:07.554 13:39:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:07.554 13:39:46 -- common/autotest_common.sh@889 -- # local i 00:16:07.554 13:39:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:07.554 13:39:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:07.554 13:39:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:07.813 13:39:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.072 [ 00:16:08.072 { 00:16:08.072 "name": "BaseBdev2", 00:16:08.072 "aliases": [ 00:16:08.072 "f88b51eb-6652-4dc6-be84-7a36c7094c92" 00:16:08.072 ], 00:16:08.072 "product_name": "Malloc disk", 00:16:08.072 "block_size": 512, 00:16:08.072 "num_blocks": 65536, 00:16:08.072 "uuid": "f88b51eb-6652-4dc6-be84-7a36c7094c92", 00:16:08.072 "assigned_rate_limits": { 00:16:08.072 "rw_ios_per_sec": 0, 00:16:08.072 "rw_mbytes_per_sec": 0, 00:16:08.072 "r_mbytes_per_sec": 0, 00:16:08.072 "w_mbytes_per_sec": 0 00:16:08.072 }, 00:16:08.072 "claimed": true, 00:16:08.072 "claim_type": "exclusive_write", 00:16:08.072 "zoned": false, 00:16:08.072 "supported_io_types": { 00:16:08.072 "read": true, 00:16:08.072 "write": true, 00:16:08.072 "unmap": true, 00:16:08.072 "write_zeroes": true, 00:16:08.072 "flush": true, 00:16:08.072 "reset": true, 00:16:08.072 "compare": false, 00:16:08.072 "compare_and_write": false, 00:16:08.072 "abort": true, 00:16:08.072 "nvme_admin": false, 00:16:08.072 "nvme_io": false 00:16:08.072 }, 00:16:08.072 "memory_domains": [ 00:16:08.072 { 00:16:08.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.072 "dma_device_type": 2 00:16:08.072 } 00:16:08.072 ], 00:16:08.072 "driver_specific": {} 00:16:08.072 } 00:16:08.072 ] 00:16:08.072 13:39:47 -- common/autotest_common.sh@895 -- # return 0 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.072 "name": "Existed_Raid", 00:16:08.072 "uuid": "fe87b63d-92b7-4bb6-bd57-12f69df84f2c", 00:16:08.072 "strip_size_kb": 0, 00:16:08.072 "state": "online", 00:16:08.072 "raid_level": "raid1", 00:16:08.072 "superblock": false, 00:16:08.072 "num_base_bdevs": 2, 00:16:08.072 
"num_base_bdevs_discovered": 2, 00:16:08.072 "num_base_bdevs_operational": 2, 00:16:08.072 "base_bdevs_list": [ 00:16:08.072 { 00:16:08.072 "name": "BaseBdev1", 00:16:08.072 "uuid": "dd376597-d781-4721-bf1e-954ad7da41e1", 00:16:08.072 "is_configured": true, 00:16:08.072 "data_offset": 0, 00:16:08.072 "data_size": 65536 00:16:08.072 }, 00:16:08.072 { 00:16:08.072 "name": "BaseBdev2", 00:16:08.072 "uuid": "f88b51eb-6652-4dc6-be84-7a36c7094c92", 00:16:08.072 "is_configured": true, 00:16:08.072 "data_offset": 0, 00:16:08.072 "data_size": 65536 00:16:08.072 } 00:16:08.072 ] 00:16:08.072 }' 00:16:08.072 13:39:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.072 13:39:47 -- common/autotest_common.sh@10 -- # set +x 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:09.011 [2024-07-10 13:39:48.178937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.011 13:39:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.271 13:39:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.271 "name": "Existed_Raid", 00:16:09.271 "uuid": "fe87b63d-92b7-4bb6-bd57-12f69df84f2c", 00:16:09.271 "strip_size_kb": 0, 00:16:09.271 "state": "online", 00:16:09.271 "raid_level": "raid1", 00:16:09.271 "superblock": false, 00:16:09.271 "num_base_bdevs": 2, 00:16:09.271 "num_base_bdevs_discovered": 1, 00:16:09.271 "num_base_bdevs_operational": 1, 00:16:09.271 "base_bdevs_list": [ 00:16:09.271 { 00:16:09.271 "name": null, 00:16:09.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.271 "is_configured": false, 00:16:09.271 "data_offset": 0, 00:16:09.271 "data_size": 65536 00:16:09.271 }, 00:16:09.271 { 00:16:09.271 "name": "BaseBdev2", 00:16:09.271 "uuid": "f88b51eb-6652-4dc6-be84-7a36c7094c92", 00:16:09.271 "is_configured": true, 00:16:09.271 "data_offset": 0, 00:16:09.271 "data_size": 65536 00:16:09.271 } 00:16:09.271 ] 00:16:09.271 }' 00:16:09.271 13:39:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.271 13:39:48 -- common/autotest_common.sh@10 -- # set +x 00:16:09.839 13:39:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:09.839 13:39:49 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:16:09.839 13:39:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.839 13:39:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:10.098 13:39:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:10.098 13:39:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.098 13:39:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:10.098 [2024-07-10 13:39:49.440436] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.098 [2024-07-10 13:39:49.440555] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.098 [2024-07-10 13:39:49.440651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.361 [2024-07-10 13:39:49.546762] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.361 [2024-07-10 13:39:49.546852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:10.361 13:39:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:10.361 13:39:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.361 13:39:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.361 13:39:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.622 13:39:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:10.622 13:39:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:10.622 13:39:49 -- bdev/bdev_raid.sh@287 -- # killprocess 117002 00:16:10.622 13:39:49 -- common/autotest_common.sh@926 -- # '[' -z 117002 ']' 00:16:10.622 13:39:49 -- common/autotest_common.sh@930 -- # kill -0 117002 00:16:10.622 13:39:49 -- common/autotest_common.sh@931 -- # uname 00:16:10.622 13:39:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.622 13:39:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117002 00:16:10.622 killing process with pid 117002 00:16:10.622 13:39:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.622 13:39:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.622 13:39:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117002' 00:16:10.622 13:39:49 -- common/autotest_common.sh@945 -- # kill 117002 00:16:10.622 13:39:49 -- common/autotest_common.sh@950 -- # wait 117002 00:16:10.622 [2024-07-10 13:39:49.787027] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.622 [2024-07-10 13:39:49.787174] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.000 ************************************ 00:16:12.000 END TEST raid_state_function_test 00:16:12.000 ************************************ 00:16:12.000 13:39:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:12.000 00:16:12.000 real 0m9.335s 00:16:12.000 user 0m15.805s 00:16:12.000 sys 0m1.169s 00:16:12.001 13:39:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.001 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:12.001 13:39:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:12.001 13:39:51 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.001 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.001 ************************************ 00:16:12.001 START TEST raid_state_function_test_sb 00:16:12.001 ************************************ 00:16:12.001 13:39:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=117323 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:12.001 Process raid pid: 117323 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117323' 00:16:12.001 13:39:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117323 /var/tmp/spdk-raid.sock 00:16:12.001 13:39:51 -- common/autotest_common.sh@819 -- # '[' -z 117323 ']' 00:16:12.001 13:39:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.001 13:39:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:12.001 13:39:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.001 13:39:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:12.001 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:16:12.001 [2024-07-10 13:39:51.205353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
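The _sb variant above repeats the state-function flow with superblock persistence enabled: superblock_create_arg=-s in the trace means every bdev_raid_create call in this test carries -s, so raid metadata is written onto the base bdevs. A minimal sketch of the core sequence, distilled from this run — all paths, socket names, and bdev names are taken verbatim from the trace; only the backgrounding and the waitforlisten synchronization are condensed:

  # start the stub app that serves the raid test RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

  # creating the raid before its base bdevs exist leaves it in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # verify_raid_bdev_state then filters the raid listing with jq and compares fields
  # such as .state, .raid_level and .num_base_bdevs_discovered against expectations
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'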
00:16:12.001 [2024-07-10 13:39:51.205578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.260 [2024-07-10 13:39:51.364399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.260 [2024-07-10 13:39:51.574822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.519 [2024-07-10 13:39:51.785475] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.779 13:39:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.779 13:39:52 -- common/autotest_common.sh@852 -- # return 0 00:16:12.779 13:39:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:13.039 [2024-07-10 13:39:52.227177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.039 [2024-07-10 13:39:52.227331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.039 [2024-07-10 13:39:52.227362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.039 [2024-07-10 13:39:52.227387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.039 13:39:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.299 13:39:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.299 "name": "Existed_Raid", 00:16:13.299 "uuid": "e476c6ba-4bd0-4426-b2c8-52f535953f60", 00:16:13.299 "strip_size_kb": 0, 00:16:13.299 "state": "configuring", 00:16:13.299 "raid_level": "raid1", 00:16:13.299 "superblock": true, 00:16:13.299 "num_base_bdevs": 2, 00:16:13.299 "num_base_bdevs_discovered": 0, 00:16:13.299 "num_base_bdevs_operational": 2, 00:16:13.299 "base_bdevs_list": [ 00:16:13.299 { 00:16:13.299 "name": "BaseBdev1", 00:16:13.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.299 "is_configured": false, 00:16:13.299 "data_offset": 0, 00:16:13.299 "data_size": 0 00:16:13.299 }, 00:16:13.299 { 00:16:13.299 "name": "BaseBdev2", 00:16:13.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.299 "is_configured": false, 00:16:13.299 "data_offset": 0, 00:16:13.299 "data_size": 0 00:16:13.299 } 00:16:13.299 ] 00:16:13.299 }' 00:16:13.299 13:39:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.299 13:39:52 -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.866 13:39:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:14.125 [2024-07-10 13:39:53.225264] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.125 [2024-07-10 13:39:53.225359] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:14.125 13:39:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:14.125 [2024-07-10 13:39:53.405002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.125 [2024-07-10 13:39:53.405141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.125 [2024-07-10 13:39:53.405170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.125 [2024-07-10 13:39:53.405200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.125 13:39:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.383 [2024-07-10 13:39:53.617010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.383 BaseBdev1 00:16:14.383 13:39:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:14.383 13:39:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:14.383 13:39:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:14.383 13:39:53 -- common/autotest_common.sh@889 -- # local i 00:16:14.383 13:39:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:14.383 13:39:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:14.383 13:39:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.642 13:39:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.642 [ 00:16:14.642 { 00:16:14.642 "name": "BaseBdev1", 00:16:14.642 "aliases": [ 00:16:14.642 "a31ea763-cef5-4ed5-97fe-df3ab5b5a3b5" 00:16:14.642 ], 00:16:14.642 "product_name": "Malloc disk", 00:16:14.642 "block_size": 512, 00:16:14.642 "num_blocks": 65536, 00:16:14.642 "uuid": "a31ea763-cef5-4ed5-97fe-df3ab5b5a3b5", 00:16:14.642 "assigned_rate_limits": { 00:16:14.642 "rw_ios_per_sec": 0, 00:16:14.642 "rw_mbytes_per_sec": 0, 00:16:14.642 "r_mbytes_per_sec": 0, 00:16:14.642 "w_mbytes_per_sec": 0 00:16:14.642 }, 00:16:14.642 "claimed": true, 00:16:14.642 "claim_type": "exclusive_write", 00:16:14.642 "zoned": false, 00:16:14.642 "supported_io_types": { 00:16:14.642 "read": true, 00:16:14.642 "write": true, 00:16:14.642 "unmap": true, 00:16:14.642 "write_zeroes": true, 00:16:14.642 "flush": true, 00:16:14.642 "reset": true, 00:16:14.642 "compare": false, 00:16:14.642 "compare_and_write": false, 00:16:14.642 "abort": true, 00:16:14.642 "nvme_admin": false, 00:16:14.642 "nvme_io": false 00:16:14.642 }, 00:16:14.642 "memory_domains": [ 00:16:14.642 { 00:16:14.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.642 "dma_device_type": 2 00:16:14.642 } 00:16:14.642 ], 00:16:14.642 "driver_specific": {} 00:16:14.642 } 00:16:14.642 ] 00:16:14.927 13:39:54 -- 
common/autotest_common.sh@895 -- # return 0 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.927 "name": "Existed_Raid", 00:16:14.927 "uuid": "d2104c04-2716-4739-9843-c88a3725c48c", 00:16:14.927 "strip_size_kb": 0, 00:16:14.927 "state": "configuring", 00:16:14.927 "raid_level": "raid1", 00:16:14.927 "superblock": true, 00:16:14.927 "num_base_bdevs": 2, 00:16:14.927 "num_base_bdevs_discovered": 1, 00:16:14.927 "num_base_bdevs_operational": 2, 00:16:14.927 "base_bdevs_list": [ 00:16:14.927 { 00:16:14.927 "name": "BaseBdev1", 00:16:14.927 "uuid": "a31ea763-cef5-4ed5-97fe-df3ab5b5a3b5", 00:16:14.927 "is_configured": true, 00:16:14.927 "data_offset": 2048, 00:16:14.927 "data_size": 63488 00:16:14.927 }, 00:16:14.927 { 00:16:14.927 "name": "BaseBdev2", 00:16:14.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.927 "is_configured": false, 00:16:14.927 "data_offset": 0, 00:16:14.927 "data_size": 0 00:16:14.927 } 00:16:14.927 ] 00:16:14.927 }' 00:16:14.927 13:39:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.927 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:16:15.575 13:39:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:15.575 [2024-07-10 13:39:54.882814] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.575 [2024-07-10 13:39:54.882943] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:15.575 13:39:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:15.575 13:39:54 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:15.835 13:39:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.095 BaseBdev1 00:16:16.095 13:39:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:16.095 13:39:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:16.095 13:39:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:16.095 13:39:55 -- common/autotest_common.sh@889 -- # local i 00:16:16.095 13:39:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:16.095 13:39:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:16.095 13:39:55 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.355 13:39:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.615 [ 00:16:16.615 { 00:16:16.615 "name": "BaseBdev1", 00:16:16.615 "aliases": [ 00:16:16.615 "b9672d55-557e-4b8e-8e9d-6cf5c52d10e0" 00:16:16.615 ], 00:16:16.615 "product_name": "Malloc disk", 00:16:16.615 "block_size": 512, 00:16:16.615 "num_blocks": 65536, 00:16:16.615 "uuid": "b9672d55-557e-4b8e-8e9d-6cf5c52d10e0", 00:16:16.615 "assigned_rate_limits": { 00:16:16.615 "rw_ios_per_sec": 0, 00:16:16.615 "rw_mbytes_per_sec": 0, 00:16:16.615 "r_mbytes_per_sec": 0, 00:16:16.615 "w_mbytes_per_sec": 0 00:16:16.615 }, 00:16:16.615 "claimed": false, 00:16:16.615 "zoned": false, 00:16:16.615 "supported_io_types": { 00:16:16.615 "read": true, 00:16:16.615 "write": true, 00:16:16.615 "unmap": true, 00:16:16.615 "write_zeroes": true, 00:16:16.615 "flush": true, 00:16:16.615 "reset": true, 00:16:16.615 "compare": false, 00:16:16.615 "compare_and_write": false, 00:16:16.615 "abort": true, 00:16:16.615 "nvme_admin": false, 00:16:16.615 "nvme_io": false 00:16:16.615 }, 00:16:16.615 "memory_domains": [ 00:16:16.615 { 00:16:16.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.615 "dma_device_type": 2 00:16:16.615 } 00:16:16.615 ], 00:16:16.615 "driver_specific": {} 00:16:16.615 } 00:16:16.615 ] 00:16:16.615 13:39:55 -- common/autotest_common.sh@895 -- # return 0 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:16.615 [2024-07-10 13:39:55.895268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.615 [2024-07-10 13:39:55.897126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.615 [2024-07-10 13:39:55.897228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.615 13:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.876 13:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.876 "name": "Existed_Raid", 00:16:16.876 "uuid": "451f8d6a-b5f0-4c8e-ba3f-54d31e4689c2", 00:16:16.876 "strip_size_kb": 0, 00:16:16.876 "state": "configuring", 
00:16:16.876 "raid_level": "raid1", 00:16:16.876 "superblock": true, 00:16:16.876 "num_base_bdevs": 2, 00:16:16.876 "num_base_bdevs_discovered": 1, 00:16:16.876 "num_base_bdevs_operational": 2, 00:16:16.876 "base_bdevs_list": [ 00:16:16.876 { 00:16:16.876 "name": "BaseBdev1", 00:16:16.876 "uuid": "b9672d55-557e-4b8e-8e9d-6cf5c52d10e0", 00:16:16.876 "is_configured": true, 00:16:16.876 "data_offset": 2048, 00:16:16.876 "data_size": 63488 00:16:16.876 }, 00:16:16.876 { 00:16:16.876 "name": "BaseBdev2", 00:16:16.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.876 "is_configured": false, 00:16:16.876 "data_offset": 0, 00:16:16.876 "data_size": 0 00:16:16.876 } 00:16:16.876 ] 00:16:16.876 }' 00:16:16.876 13:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.876 13:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.445 13:39:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.704 [2024-07-10 13:39:56.853115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.704 [2024-07-10 13:39:56.853393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:17.704 [2024-07-10 13:39:56.853423] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:17.704 [2024-07-10 13:39:56.853584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:17.704 BaseBdev2 00:16:17.704 [2024-07-10 13:39:56.853907] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:17.704 [2024-07-10 13:39:56.853918] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:17.704 [2024-07-10 13:39:56.854057] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.704 13:39:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:17.704 13:39:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:17.704 13:39:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:17.704 13:39:56 -- common/autotest_common.sh@889 -- # local i 00:16:17.704 13:39:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:17.704 13:39:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:17.704 13:39:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.704 13:39:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.964 [ 00:16:17.964 { 00:16:17.964 "name": "BaseBdev2", 00:16:17.964 "aliases": [ 00:16:17.964 "628254a6-0244-4778-8b19-450b33aac928" 00:16:17.964 ], 00:16:17.964 "product_name": "Malloc disk", 00:16:17.964 "block_size": 512, 00:16:17.964 "num_blocks": 65536, 00:16:17.964 "uuid": "628254a6-0244-4778-8b19-450b33aac928", 00:16:17.964 "assigned_rate_limits": { 00:16:17.964 "rw_ios_per_sec": 0, 00:16:17.964 "rw_mbytes_per_sec": 0, 00:16:17.964 "r_mbytes_per_sec": 0, 00:16:17.964 "w_mbytes_per_sec": 0 00:16:17.964 }, 00:16:17.964 "claimed": true, 00:16:17.964 "claim_type": "exclusive_write", 00:16:17.964 "zoned": false, 00:16:17.964 "supported_io_types": { 00:16:17.964 "read": true, 00:16:17.964 "write": true, 00:16:17.964 "unmap": true, 00:16:17.964 "write_zeroes": true, 00:16:17.964 "flush": true, 00:16:17.964 "reset": true, 
00:16:17.964 "compare": false, 00:16:17.964 "compare_and_write": false, 00:16:17.964 "abort": true, 00:16:17.964 "nvme_admin": false, 00:16:17.964 "nvme_io": false 00:16:17.964 }, 00:16:17.964 "memory_domains": [ 00:16:17.964 { 00:16:17.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.964 "dma_device_type": 2 00:16:17.964 } 00:16:17.964 ], 00:16:17.964 "driver_specific": {} 00:16:17.964 } 00:16:17.964 ] 00:16:17.964 13:39:57 -- common/autotest_common.sh@895 -- # return 0 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.964 13:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.223 13:39:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.223 "name": "Existed_Raid", 00:16:18.223 "uuid": "451f8d6a-b5f0-4c8e-ba3f-54d31e4689c2", 00:16:18.223 "strip_size_kb": 0, 00:16:18.223 "state": "online", 00:16:18.223 "raid_level": "raid1", 00:16:18.223 "superblock": true, 00:16:18.223 "num_base_bdevs": 2, 00:16:18.223 "num_base_bdevs_discovered": 2, 00:16:18.223 "num_base_bdevs_operational": 2, 00:16:18.223 "base_bdevs_list": [ 00:16:18.223 { 00:16:18.223 "name": "BaseBdev1", 00:16:18.223 "uuid": "b9672d55-557e-4b8e-8e9d-6cf5c52d10e0", 00:16:18.223 "is_configured": true, 00:16:18.223 "data_offset": 2048, 00:16:18.223 "data_size": 63488 00:16:18.223 }, 00:16:18.223 { 00:16:18.223 "name": "BaseBdev2", 00:16:18.223 "uuid": "628254a6-0244-4778-8b19-450b33aac928", 00:16:18.223 "is_configured": true, 00:16:18.223 "data_offset": 2048, 00:16:18.223 "data_size": 63488 00:16:18.223 } 00:16:18.223 ] 00:16:18.224 }' 00:16:18.224 13:39:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.224 13:39:57 -- common/autotest_common.sh@10 -- # set +x 00:16:18.793 13:39:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:19.053 [2024-07-10 13:39:58.238828] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.053 
13:39:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.053 13:39:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.312 13:39:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.312 "name": "Existed_Raid", 00:16:19.312 "uuid": "451f8d6a-b5f0-4c8e-ba3f-54d31e4689c2", 00:16:19.312 "strip_size_kb": 0, 00:16:19.312 "state": "online", 00:16:19.312 "raid_level": "raid1", 00:16:19.312 "superblock": true, 00:16:19.312 "num_base_bdevs": 2, 00:16:19.312 "num_base_bdevs_discovered": 1, 00:16:19.312 "num_base_bdevs_operational": 1, 00:16:19.312 "base_bdevs_list": [ 00:16:19.312 { 00:16:19.312 "name": null, 00:16:19.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.312 "is_configured": false, 00:16:19.312 "data_offset": 2048, 00:16:19.312 "data_size": 63488 00:16:19.312 }, 00:16:19.312 { 00:16:19.312 "name": "BaseBdev2", 00:16:19.312 "uuid": "628254a6-0244-4778-8b19-450b33aac928", 00:16:19.312 "is_configured": true, 00:16:19.312 "data_offset": 2048, 00:16:19.312 "data_size": 63488 00:16:19.312 } 00:16:19.312 ] 00:16:19.312 }' 00:16:19.312 13:39:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.312 13:39:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.879 13:39:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:19.879 13:39:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:19.879 13:39:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.879 13:39:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:20.137 13:39:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:20.137 13:39:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.137 13:39:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:20.137 [2024-07-10 13:39:59.425574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.137 [2024-07-10 13:39:59.425654] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.137 [2024-07-10 13:39:59.425738] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.396 [2024-07-10 13:39:59.521147] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.396 [2024-07-10 13:39:59.521250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:20.396 13:39:59 -- bdev/bdev_raid.sh@287 -- # killprocess 117323 00:16:20.396 13:39:59 -- common/autotest_common.sh@926 -- # '[' -z 117323 ']' 00:16:20.396 13:39:59 -- common/autotest_common.sh@930 -- # kill -0 117323 00:16:20.396 13:39:59 -- common/autotest_common.sh@931 -- # uname 00:16:20.396 13:39:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:20.396 13:39:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117323 00:16:20.396 killing process with pid 117323 00:16:20.396 13:39:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:20.396 13:39:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:20.396 13:39:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117323' 00:16:20.396 13:39:59 -- common/autotest_common.sh@945 -- # kill 117323 00:16:20.396 13:39:59 -- common/autotest_common.sh@950 -- # wait 117323 00:16:20.396 [2024-07-10 13:39:59.748465] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.396 [2024-07-10 13:39:59.748603] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.793 ************************************ 00:16:21.793 END TEST raid_state_function_test_sb 00:16:21.793 ************************************ 00:16:21.793 13:40:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:21.793 00:16:21.793 real 0m9.841s 00:16:21.793 user 0m16.768s 00:16:21.793 sys 0m1.152s 00:16:21.793 13:40:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.793 13:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:21.793 13:40:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:21.793 13:40:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.793 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:16:21.793 ************************************ 00:16:21.793 START TEST raid_superblock_test 00:16:21.793 ************************************ 00:16:21.793 13:40:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=117664 00:16:21.793 13:40:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117664 /var/tmp/spdk-raid.sock 00:16:21.793 13:40:01 -- common/autotest_common.sh@819 -- # '[' -z 117664 ']' 00:16:21.793 13:40:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.793 13:40:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:21.793 13:40:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.793 13:40:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:21.793 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:16:21.793 [2024-07-10 13:40:01.107722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:21.793 [2024-07-10 13:40:01.107994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117664 ] 00:16:22.053 [2024-07-10 13:40:01.275406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.312 [2024-07-10 13:40:01.461161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.312 [2024-07-10 13:40:01.664363] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.571 13:40:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:22.571 13:40:01 -- common/autotest_common.sh@852 -- # return 0 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:22.571 13:40:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:22.829 malloc1 00:16:22.830 13:40:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.088 [2024-07-10 13:40:02.289229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.088 [2024-07-10 13:40:02.289360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.088 [2024-07-10 13:40:02.289413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:23.088 [2024-07-10 13:40:02.289463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.088 [2024-07-10 13:40:02.291216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.088 [2024-07-10 13:40:02.291291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:23.088 pt1 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.088 13:40:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:23.346 malloc2 00:16:23.347 13:40:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.347 [2024-07-10 13:40:02.700885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.347 [2024-07-10 13:40:02.701031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.347 [2024-07-10 13:40:02.701100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:23.347 [2024-07-10 13:40:02.701165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.605 [2024-07-10 13:40:02.703340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.605 [2024-07-10 13:40:02.703419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.605 pt2 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:23.605 [2024-07-10 13:40:02.864656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.605 [2024-07-10 13:40:02.866287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.605 [2024-07-10 13:40:02.866516] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:23.605 [2024-07-10 13:40:02.866555] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:23.605 [2024-07-10 13:40:02.866694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:23.605 [2024-07-10 13:40:02.867039] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:23.605 [2024-07-10 13:40:02.867077] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:23.605 [2024-07-10 13:40:02.867244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.605 
13:40:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.605 13:40:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.863 13:40:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.863 "name": "raid_bdev1", 00:16:23.863 "uuid": "33462b43-265c-4b4a-bdc5-14ed85b3e006", 00:16:23.863 "strip_size_kb": 0, 00:16:23.863 "state": "online", 00:16:23.863 "raid_level": "raid1", 00:16:23.863 "superblock": true, 00:16:23.863 "num_base_bdevs": 2, 00:16:23.863 "num_base_bdevs_discovered": 2, 00:16:23.863 "num_base_bdevs_operational": 2, 00:16:23.863 "base_bdevs_list": [ 00:16:23.863 { 00:16:23.863 "name": "pt1", 00:16:23.863 "uuid": "81b25725-900c-5e84-b540-c3522c9321d2", 00:16:23.863 "is_configured": true, 00:16:23.863 "data_offset": 2048, 00:16:23.863 "data_size": 63488 00:16:23.863 }, 00:16:23.863 { 00:16:23.863 "name": "pt2", 00:16:23.863 "uuid": "273ae0b1-b0a7-5416-aafa-5ea431cfcb27", 00:16:23.863 "is_configured": true, 00:16:23.863 "data_offset": 2048, 00:16:23.863 "data_size": 63488 00:16:23.863 } 00:16:23.863 ] 00:16:23.863 }' 00:16:23.863 13:40:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.863 13:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.430 13:40:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:24.430 13:40:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:24.688 [2024-07-10 13:40:03.910933] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.688 13:40:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=33462b43-265c-4b4a-bdc5-14ed85b3e006 00:16:24.688 13:40:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 33462b43-265c-4b4a-bdc5-14ed85b3e006 ']' 00:16:24.688 13:40:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:24.948 [2024-07-10 13:40:04.082480] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.948 [2024-07-10 13:40:04.082580] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.948 [2024-07-10 13:40:04.082673] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.948 [2024-07-10 13:40:04.082743] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.948 [2024-07-10 13:40:04.082760] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.948 13:40:04 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.207 13:40:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.207 13:40:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:25.466 13:40:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:25.466 13:40:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:25.726 13:40:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:25.726 13:40:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:25.726 13:40:04 -- common/autotest_common.sh@640 -- # local es=0 00:16:25.726 13:40:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:25.726 13:40:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.726 13:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:25.726 13:40:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.726 13:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:25.726 13:40:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.726 13:40:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:25.726 13:40:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.726 13:40:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:25.726 13:40:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:25.726 [2024-07-10 13:40:05.000805] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:25.726 [2024-07-10 13:40:05.002478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:25.726 [2024-07-10 13:40:05.002576] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:25.726 [2024-07-10 13:40:05.002667] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:25.726 [2024-07-10 13:40:05.002707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.726 [2024-07-10 13:40:05.002727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:25.726 request: 00:16:25.726 { 00:16:25.726 "name": "raid_bdev1", 00:16:25.726 "raid_level": "raid1", 00:16:25.726 "base_bdevs": [ 00:16:25.726 "malloc1", 00:16:25.726 "malloc2" 00:16:25.726 ], 00:16:25.726 "superblock": false, 00:16:25.726 "method": "bdev_raid_create", 00:16:25.726 "req_id": 1 00:16:25.726 } 00:16:25.726 Got JSON-RPC error response 00:16:25.726 response: 00:16:25.726 { 00:16:25.726 "code": -17, 00:16:25.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:25.726 } 00:16:25.726 13:40:05 -- common/autotest_common.sh@643 -- # es=1 00:16:25.726 
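The NOT wrapper traced above asserts a failure path: malloc1 and malloc2 still carry the raid1 superblock written when raid_bdev1 was first created with the -s flag, so bdev_raid_create must refuse to assemble a new array over them. A minimal standalone sketch of the same check, assuming a running SPDK target on /var/tmp/spdk-raid.sock and the repo paths used in this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
  # expected to fail with the JSON-RPC error captured above:
  #   code -17, "Failed to create RAID bdev raid_bdev1: File exists"

The harness simply inverts the exit status of this call with NOT and continues only if the create was rejected.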
13:40:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:25.726 13:40:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:25.726 13:40:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:25.726 13:40:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.726 13:40:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:25.986 13:40:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:25.986 13:40:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:25.986 13:40:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.246 [2024-07-10 13:40:05.356201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.246 [2024-07-10 13:40:05.356335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.246 [2024-07-10 13:40:05.356382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.246 [2024-07-10 13:40:05.356443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.246 [2024-07-10 13:40:05.358188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.246 [2024-07-10 13:40:05.358260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.246 [2024-07-10 13:40:05.358389] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:26.246 [2024-07-10 13:40:05.358451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.246 pt1 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.246 "name": "raid_bdev1", 00:16:26.246 "uuid": "33462b43-265c-4b4a-bdc5-14ed85b3e006", 00:16:26.246 "strip_size_kb": 0, 00:16:26.246 "state": "configuring", 00:16:26.246 "raid_level": "raid1", 00:16:26.246 "superblock": true, 00:16:26.246 "num_base_bdevs": 2, 00:16:26.246 "num_base_bdevs_discovered": 1, 00:16:26.246 "num_base_bdevs_operational": 2, 00:16:26.246 "base_bdevs_list": [ 00:16:26.246 { 00:16:26.246 "name": "pt1", 00:16:26.246 "uuid": "81b25725-900c-5e84-b540-c3522c9321d2", 00:16:26.246 "is_configured": true, 00:16:26.246 "data_offset": 2048, 00:16:26.246 "data_size": 63488 00:16:26.246 }, 00:16:26.246 { 00:16:26.246 "name": null, 00:16:26.246 "uuid": 
"273ae0b1-b0a7-5416-aafa-5ea431cfcb27", 00:16:26.246 "is_configured": false, 00:16:26.246 "data_offset": 2048, 00:16:26.246 "data_size": 63488 00:16:26.246 } 00:16:26.246 ] 00:16:26.246 }' 00:16:26.246 13:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.246 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.815 13:40:06 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:26.815 13:40:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:26.815 13:40:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:26.815 13:40:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.074 [2024-07-10 13:40:06.298563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.074 [2024-07-10 13:40:06.298717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.074 [2024-07-10 13:40:06.298778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:27.074 [2024-07-10 13:40:06.298815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.074 [2024-07-10 13:40:06.299212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.074 [2024-07-10 13:40:06.299277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.074 [2024-07-10 13:40:06.299396] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:27.074 [2024-07-10 13:40:06.299440] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.074 [2024-07-10 13:40:06.299559] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:27.074 [2024-07-10 13:40:06.299589] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.074 [2024-07-10 13:40:06.299730] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:27.074 [2024-07-10 13:40:06.300034] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:27.074 [2024-07-10 13:40:06.300072] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:27.074 [2024-07-10 13:40:06.300234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.074 pt2 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:27.074 13:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.333 13:40:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.333 "name": "raid_bdev1", 00:16:27.333 "uuid": "33462b43-265c-4b4a-bdc5-14ed85b3e006", 00:16:27.333 "strip_size_kb": 0, 00:16:27.333 "state": "online", 00:16:27.333 "raid_level": "raid1", 00:16:27.333 "superblock": true, 00:16:27.333 "num_base_bdevs": 2, 00:16:27.333 "num_base_bdevs_discovered": 2, 00:16:27.333 "num_base_bdevs_operational": 2, 00:16:27.333 "base_bdevs_list": [ 00:16:27.333 { 00:16:27.333 "name": "pt1", 00:16:27.333 "uuid": "81b25725-900c-5e84-b540-c3522c9321d2", 00:16:27.333 "is_configured": true, 00:16:27.334 "data_offset": 2048, 00:16:27.334 "data_size": 63488 00:16:27.334 }, 00:16:27.334 { 00:16:27.334 "name": "pt2", 00:16:27.334 "uuid": "273ae0b1-b0a7-5416-aafa-5ea431cfcb27", 00:16:27.334 "is_configured": true, 00:16:27.334 "data_offset": 2048, 00:16:27.334 "data_size": 63488 00:16:27.334 } 00:16:27.334 ] 00:16:27.334 }' 00:16:27.334 13:40:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.334 13:40:06 -- common/autotest_common.sh@10 -- # set +x 00:16:27.903 13:40:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:27.903 13:40:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:27.903 [2024-07-10 13:40:07.249076] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@430 -- # '[' 33462b43-265c-4b4a-bdc5-14ed85b3e006 '!=' 33462b43-265c-4b4a-bdc5-14ed85b3e006 ']' 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:28.163 [2024-07-10 13:40:07.428620] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.163 13:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.423 13:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.423 "name": "raid_bdev1", 00:16:28.423 "uuid": "33462b43-265c-4b4a-bdc5-14ed85b3e006", 00:16:28.423 "strip_size_kb": 0, 00:16:28.423 "state": "online", 00:16:28.423 "raid_level": "raid1", 00:16:28.423 "superblock": true, 00:16:28.423 "num_base_bdevs": 2, 00:16:28.423 "num_base_bdevs_discovered": 1, 00:16:28.423 
"num_base_bdevs_operational": 1, 00:16:28.423 "base_bdevs_list": [ 00:16:28.423 { 00:16:28.423 "name": null, 00:16:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.423 "is_configured": false, 00:16:28.423 "data_offset": 2048, 00:16:28.423 "data_size": 63488 00:16:28.423 }, 00:16:28.423 { 00:16:28.423 "name": "pt2", 00:16:28.423 "uuid": "273ae0b1-b0a7-5416-aafa-5ea431cfcb27", 00:16:28.423 "is_configured": true, 00:16:28.423 "data_offset": 2048, 00:16:28.423 "data_size": 63488 00:16:28.423 } 00:16:28.423 ] 00:16:28.423 }' 00:16:28.423 13:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.423 13:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:28.992 13:40:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:29.253 [2024-07-10 13:40:08.402805] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.253 [2024-07-10 13:40:08.402893] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.253 [2024-07-10 13:40:08.402979] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.253 [2024-07-10 13:40:08.403030] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.253 [2024-07-10 13:40:08.403062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:29.253 13:40:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:29.513 13:40:08 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.772 [2024-07-10 13:40:08.947089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.772 [2024-07-10 13:40:08.947291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.772 [2024-07-10 13:40:08.947335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:29.772 [2024-07-10 13:40:08.947378] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.772 [2024-07-10 13:40:08.949378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.772 [2024-07-10 13:40:08.949485] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.772 [2024-07-10 13:40:08.949626] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:29.772 [2024-07-10 
13:40:08.949710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.772 [2024-07-10 13:40:08.949860] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:29.772 [2024-07-10 13:40:08.949894] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:29.772 [2024-07-10 13:40:08.950029] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:29.772 [2024-07-10 13:40:08.950364] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:29.772 [2024-07-10 13:40:08.950406] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:29.772 [2024-07-10 13:40:08.950565] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.772 pt2 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.772 13:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.031 13:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.031 "name": "raid_bdev1", 00:16:30.031 "uuid": "33462b43-265c-4b4a-bdc5-14ed85b3e006", 00:16:30.031 "strip_size_kb": 0, 00:16:30.031 "state": "online", 00:16:30.031 "raid_level": "raid1", 00:16:30.031 "superblock": true, 00:16:30.031 "num_base_bdevs": 2, 00:16:30.031 "num_base_bdevs_discovered": 1, 00:16:30.031 "num_base_bdevs_operational": 1, 00:16:30.031 "base_bdevs_list": [ 00:16:30.031 { 00:16:30.031 "name": null, 00:16:30.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.031 "is_configured": false, 00:16:30.031 "data_offset": 2048, 00:16:30.031 "data_size": 63488 00:16:30.031 }, 00:16:30.031 { 00:16:30.031 "name": "pt2", 00:16:30.031 "uuid": "273ae0b1-b0a7-5416-aafa-5ea431cfcb27", 00:16:30.031 "is_configured": true, 00:16:30.031 "data_offset": 2048, 00:16:30.031 "data_size": 63488 00:16:30.031 } 00:16:30.031 ] 00:16:30.031 }' 00:16:30.031 13:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.031 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.599 13:40:09 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:30.599 13:40:09 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.599 13:40:09 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:30.599 [2024-07-10 13:40:09.913568] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.599 13:40:09 -- bdev/bdev_raid.sh@506 -- # '[' 33462b43-265c-4b4a-bdc5-14ed85b3e006 '!=' 33462b43-265c-4b4a-bdc5-14ed85b3e006 ']' 00:16:30.599 
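This closes out the superblock scenario: a raid1 bdev created with an on-disk superblock reassembles in degraded form, since re-registering pt2 alone was enough for examine to find the superblock and bring raid_bdev1 back online with one of two members discovered (the missing member appears as a null entry in base_bdevs_list). Every state assertion in this log reduces to one RPC plus a jq filter; a sketch of an equivalent one-liner follows, where the combined output string is illustrative rather than part of the harness:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
        | "\(.state) \(.raid_level) discovered=\(.num_base_bdevs_discovered) operational=\(.num_base_bdevs_operational)"'
  # expected output at this point in the log: online raid1 discovered=1 operational=1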
13:40:09 -- bdev/bdev_raid.sh@511 -- # killprocess 117664 00:16:30.599 13:40:09 -- common/autotest_common.sh@926 -- # '[' -z 117664 ']' 00:16:30.599 13:40:09 -- common/autotest_common.sh@930 -- # kill -0 117664 00:16:30.599 13:40:09 -- common/autotest_common.sh@931 -- # uname 00:16:30.600 13:40:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.600 13:40:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117664 00:16:30.858 killing process with pid 117664 00:16:30.859 13:40:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.859 13:40:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.859 13:40:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117664' 00:16:30.859 13:40:09 -- common/autotest_common.sh@945 -- # kill 117664 00:16:30.859 13:40:09 -- common/autotest_common.sh@950 -- # wait 117664 00:16:30.859 [2024-07-10 13:40:09.960803] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.859 [2024-07-10 13:40:09.960872] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.859 [2024-07-10 13:40:09.960952] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.859 [2024-07-10 13:40:09.960981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:30.859 [2024-07-10 13:40:10.149877] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.234 ************************************ 00:16:32.234 END TEST raid_superblock_test 00:16:32.234 ************************************ 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:32.234 00:16:32.234 real 0m10.339s 00:16:32.234 user 0m17.988s 00:16:32.234 sys 0m1.293s 00:16:32.234 13:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.234 13:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:32.234 13:40:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:32.234 13:40:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.234 13:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 ************************************ 00:16:32.234 START TEST raid_state_function_test 00:16:32.234 ************************************ 00:16:32.234 13:40:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # 
echo BaseBdev2 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=118016 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118016' 00:16:32.234 Process raid pid: 118016 00:16:32.234 13:40:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118016 /var/tmp/spdk-raid.sock 00:16:32.234 13:40:11 -- common/autotest_common.sh@819 -- # '[' -z 118016 ']' 00:16:32.234 13:40:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:32.234 13:40:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.234 13:40:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:32.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:32.234 13:40:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.234 13:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.234 [2024-07-10 13:40:11.517924] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
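Each test in this log follows the same bring-up pattern: launch a bare bdev_svc application with a private RPC socket, wait for the socket to come up, then drive all configuration over JSON-RPC. A hedged sketch of that pattern using the paths recorded in this trace (backgrounding with & is inferred from the raid_pid/waitforlisten sequence, not shown verbatim in the xtrace):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh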
00:16:32.234 [2024-07-10 13:40:11.518150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.492 [2024-07-10 13:40:11.675818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.751 [2024-07-10 13:40:11.872914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.751 [2024-07-10 13:40:12.065322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.010 13:40:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.010 13:40:12 -- common/autotest_common.sh@852 -- # return 0 00:16:33.010 13:40:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.268 [2024-07-10 13:40:12.503209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.268 [2024-07-10 13:40:12.503362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.268 [2024-07-10 13:40:12.503392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.268 [2024-07-10 13:40:12.503416] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.268 [2024-07-10 13:40:12.503430] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.268 [2024-07-10 13:40:12.503471] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.268 13:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.526 13:40:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.526 "name": "Existed_Raid", 00:16:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.526 "strip_size_kb": 64, 00:16:33.526 "state": "configuring", 00:16:33.526 "raid_level": "raid0", 00:16:33.526 "superblock": false, 00:16:33.526 "num_base_bdevs": 3, 00:16:33.526 "num_base_bdevs_discovered": 0, 00:16:33.526 "num_base_bdevs_operational": 3, 00:16:33.526 "base_bdevs_list": [ 00:16:33.526 { 00:16:33.526 "name": "BaseBdev1", 00:16:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.526 "is_configured": false, 00:16:33.526 "data_offset": 0, 00:16:33.526 "data_size": 0 00:16:33.526 }, 00:16:33.526 { 00:16:33.526 "name": "BaseBdev2", 00:16:33.526 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:33.526 "is_configured": false, 00:16:33.526 "data_offset": 0, 00:16:33.526 "data_size": 0 00:16:33.526 }, 00:16:33.526 { 00:16:33.526 "name": "BaseBdev3", 00:16:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.526 "is_configured": false, 00:16:33.526 "data_offset": 0, 00:16:33.526 "data_size": 0 00:16:33.526 } 00:16:33.526 ] 00:16:33.526 }' 00:16:33.526 13:40:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.526 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:16:34.091 13:40:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.350 [2024-07-10 13:40:13.469418] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.350 [2024-07-10 13:40:13.469527] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:34.350 13:40:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:34.350 [2024-07-10 13:40:13.661095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.350 [2024-07-10 13:40:13.661207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.350 [2024-07-10 13:40:13.661232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.350 [2024-07-10 13:40:13.661255] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.350 [2024-07-10 13:40:13.661268] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.350 [2024-07-10 13:40:13.661301] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.350 13:40:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.609 [2024-07-10 13:40:13.870274] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.609 BaseBdev1 00:16:34.609 13:40:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:34.609 13:40:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:34.609 13:40:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:34.609 13:40:13 -- common/autotest_common.sh@889 -- # local i 00:16:34.609 13:40:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:34.609 13:40:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:34.609 13:40:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.868 13:40:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.126 [ 00:16:35.126 { 00:16:35.126 "name": "BaseBdev1", 00:16:35.126 "aliases": [ 00:16:35.126 "265a36ec-e692-449c-8bb2-f129390e419a" 00:16:35.126 ], 00:16:35.126 "product_name": "Malloc disk", 00:16:35.126 "block_size": 512, 00:16:35.126 "num_blocks": 65536, 00:16:35.126 "uuid": "265a36ec-e692-449c-8bb2-f129390e419a", 00:16:35.126 "assigned_rate_limits": { 00:16:35.126 "rw_ios_per_sec": 0, 00:16:35.126 "rw_mbytes_per_sec": 0, 00:16:35.126 "r_mbytes_per_sec": 0, 00:16:35.126 "w_mbytes_per_sec": 0 
00:16:35.126 }, 00:16:35.126 "claimed": true, 00:16:35.126 "claim_type": "exclusive_write", 00:16:35.126 "zoned": false, 00:16:35.126 "supported_io_types": { 00:16:35.126 "read": true, 00:16:35.126 "write": true, 00:16:35.126 "unmap": true, 00:16:35.126 "write_zeroes": true, 00:16:35.126 "flush": true, 00:16:35.126 "reset": true, 00:16:35.126 "compare": false, 00:16:35.126 "compare_and_write": false, 00:16:35.126 "abort": true, 00:16:35.126 "nvme_admin": false, 00:16:35.126 "nvme_io": false 00:16:35.126 }, 00:16:35.126 "memory_domains": [ 00:16:35.126 { 00:16:35.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.127 "dma_device_type": 2 00:16:35.127 } 00:16:35.127 ], 00:16:35.127 "driver_specific": {} 00:16:35.127 } 00:16:35.127 ] 00:16:35.127 13:40:14 -- common/autotest_common.sh@895 -- # return 0 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.127 "name": "Existed_Raid", 00:16:35.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.127 "strip_size_kb": 64, 00:16:35.127 "state": "configuring", 00:16:35.127 "raid_level": "raid0", 00:16:35.127 "superblock": false, 00:16:35.127 "num_base_bdevs": 3, 00:16:35.127 "num_base_bdevs_discovered": 1, 00:16:35.127 "num_base_bdevs_operational": 3, 00:16:35.127 "base_bdevs_list": [ 00:16:35.127 { 00:16:35.127 "name": "BaseBdev1", 00:16:35.127 "uuid": "265a36ec-e692-449c-8bb2-f129390e419a", 00:16:35.127 "is_configured": true, 00:16:35.127 "data_offset": 0, 00:16:35.127 "data_size": 65536 00:16:35.127 }, 00:16:35.127 { 00:16:35.127 "name": "BaseBdev2", 00:16:35.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.127 "is_configured": false, 00:16:35.127 "data_offset": 0, 00:16:35.127 "data_size": 0 00:16:35.127 }, 00:16:35.127 { 00:16:35.127 "name": "BaseBdev3", 00:16:35.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.127 "is_configured": false, 00:16:35.127 "data_offset": 0, 00:16:35.127 "data_size": 0 00:16:35.127 } 00:16:35.127 ] 00:16:35.127 }' 00:16:35.127 13:40:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.127 13:40:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.695 13:40:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:35.954 [2024-07-10 13:40:15.199937] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.954 [2024-07-10 13:40:15.200069] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:35.954 13:40:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:35.954 13:40:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:36.275 [2024-07-10 13:40:15.375693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.275 [2024-07-10 13:40:15.377396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.275 [2024-07-10 13:40:15.377473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.275 [2024-07-10 13:40:15.377494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.275 [2024-07-10 13:40:15.377538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.275 13:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.276 13:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.276 13:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.276 13:40:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.276 "name": "Existed_Raid", 00:16:36.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.276 "strip_size_kb": 64, 00:16:36.276 "state": "configuring", 00:16:36.276 "raid_level": "raid0", 00:16:36.276 "superblock": false, 00:16:36.276 "num_base_bdevs": 3, 00:16:36.276 "num_base_bdevs_discovered": 1, 00:16:36.276 "num_base_bdevs_operational": 3, 00:16:36.276 "base_bdevs_list": [ 00:16:36.276 { 00:16:36.276 "name": "BaseBdev1", 00:16:36.276 "uuid": "265a36ec-e692-449c-8bb2-f129390e419a", 00:16:36.276 "is_configured": true, 00:16:36.276 "data_offset": 0, 00:16:36.276 "data_size": 65536 00:16:36.276 }, 00:16:36.276 { 00:16:36.276 "name": "BaseBdev2", 00:16:36.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.276 "is_configured": false, 00:16:36.276 "data_offset": 0, 00:16:36.276 "data_size": 0 00:16:36.276 }, 00:16:36.276 { 00:16:36.276 "name": "BaseBdev3", 00:16:36.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.276 "is_configured": false, 00:16:36.276 "data_offset": 0, 00:16:36.276 "data_size": 0 00:16:36.276 } 00:16:36.276 ] 00:16:36.276 }' 00:16:36.276 13:40:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.276 13:40:15 -- common/autotest_common.sh@10 -- # set +x 00:16:36.843 13:40:16 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.105 [2024-07-10 13:40:16.406655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.105 BaseBdev2 00:16:37.105 13:40:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:37.105 13:40:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:37.105 13:40:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:37.105 13:40:16 -- common/autotest_common.sh@889 -- # local i 00:16:37.105 13:40:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:37.105 13:40:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:37.105 13:40:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.365 13:40:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.624 [ 00:16:37.624 { 00:16:37.624 "name": "BaseBdev2", 00:16:37.624 "aliases": [ 00:16:37.624 "7c9fb529-1ffa-4759-b657-c0c57d7b9144" 00:16:37.624 ], 00:16:37.624 "product_name": "Malloc disk", 00:16:37.624 "block_size": 512, 00:16:37.624 "num_blocks": 65536, 00:16:37.624 "uuid": "7c9fb529-1ffa-4759-b657-c0c57d7b9144", 00:16:37.624 "assigned_rate_limits": { 00:16:37.624 "rw_ios_per_sec": 0, 00:16:37.624 "rw_mbytes_per_sec": 0, 00:16:37.624 "r_mbytes_per_sec": 0, 00:16:37.624 "w_mbytes_per_sec": 0 00:16:37.624 }, 00:16:37.624 "claimed": true, 00:16:37.624 "claim_type": "exclusive_write", 00:16:37.624 "zoned": false, 00:16:37.624 "supported_io_types": { 00:16:37.624 "read": true, 00:16:37.624 "write": true, 00:16:37.624 "unmap": true, 00:16:37.624 "write_zeroes": true, 00:16:37.624 "flush": true, 00:16:37.624 "reset": true, 00:16:37.624 "compare": false, 00:16:37.624 "compare_and_write": false, 00:16:37.624 "abort": true, 00:16:37.624 "nvme_admin": false, 00:16:37.624 "nvme_io": false 00:16:37.624 }, 00:16:37.624 "memory_domains": [ 00:16:37.624 { 00:16:37.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.624 "dma_device_type": 2 00:16:37.624 } 00:16:37.624 ], 00:16:37.624 "driver_specific": {} 00:16:37.624 } 00:16:37.624 ] 00:16:37.624 13:40:16 -- common/autotest_common.sh@895 -- # return 0 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.624 13:40:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:37.883 13:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.883 "name": "Existed_Raid", 00:16:37.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.883 "strip_size_kb": 64, 00:16:37.883 "state": "configuring", 00:16:37.883 "raid_level": "raid0", 00:16:37.883 "superblock": false, 00:16:37.883 "num_base_bdevs": 3, 00:16:37.883 "num_base_bdevs_discovered": 2, 00:16:37.883 "num_base_bdevs_operational": 3, 00:16:37.883 "base_bdevs_list": [ 00:16:37.883 { 00:16:37.883 "name": "BaseBdev1", 00:16:37.883 "uuid": "265a36ec-e692-449c-8bb2-f129390e419a", 00:16:37.883 "is_configured": true, 00:16:37.883 "data_offset": 0, 00:16:37.883 "data_size": 65536 00:16:37.883 }, 00:16:37.883 { 00:16:37.883 "name": "BaseBdev2", 00:16:37.883 "uuid": "7c9fb529-1ffa-4759-b657-c0c57d7b9144", 00:16:37.883 "is_configured": true, 00:16:37.883 "data_offset": 0, 00:16:37.883 "data_size": 65536 00:16:37.883 }, 00:16:37.883 { 00:16:37.883 "name": "BaseBdev3", 00:16:37.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.883 "is_configured": false, 00:16:37.883 "data_offset": 0, 00:16:37.883 "data_size": 0 00:16:37.883 } 00:16:37.883 ] 00:16:37.883 }' 00:16:37.883 13:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.883 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:16:38.450 13:40:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.450 [2024-07-10 13:40:17.797211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.450 [2024-07-10 13:40:17.797359] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:38.450 [2024-07-10 13:40:17.797381] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:38.450 [2024-07-10 13:40:17.797548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:38.450 [2024-07-10 13:40:17.797993] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:38.450 [2024-07-10 13:40:17.798049] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:38.450 [2024-07-10 13:40:17.798348] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.450 BaseBdev3 00:16:38.708 13:40:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:38.708 13:40:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:38.708 13:40:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:38.708 13:40:17 -- common/autotest_common.sh@889 -- # local i 00:16:38.708 13:40:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:38.708 13:40:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:38.708 13:40:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.708 13:40:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.967 [ 00:16:38.967 { 00:16:38.967 "name": "BaseBdev3", 00:16:38.967 "aliases": [ 00:16:38.967 "51ce3cf4-0356-4c40-8e3b-172f31252065" 00:16:38.967 ], 00:16:38.967 "product_name": "Malloc disk", 00:16:38.967 "block_size": 512, 00:16:38.967 "num_blocks": 65536, 00:16:38.967 "uuid": "51ce3cf4-0356-4c40-8e3b-172f31252065", 00:16:38.967 "assigned_rate_limits": { 00:16:38.967 
"rw_ios_per_sec": 0, 00:16:38.967 "rw_mbytes_per_sec": 0, 00:16:38.967 "r_mbytes_per_sec": 0, 00:16:38.967 "w_mbytes_per_sec": 0 00:16:38.967 }, 00:16:38.967 "claimed": true, 00:16:38.967 "claim_type": "exclusive_write", 00:16:38.967 "zoned": false, 00:16:38.967 "supported_io_types": { 00:16:38.967 "read": true, 00:16:38.967 "write": true, 00:16:38.967 "unmap": true, 00:16:38.967 "write_zeroes": true, 00:16:38.967 "flush": true, 00:16:38.967 "reset": true, 00:16:38.967 "compare": false, 00:16:38.967 "compare_and_write": false, 00:16:38.967 "abort": true, 00:16:38.967 "nvme_admin": false, 00:16:38.967 "nvme_io": false 00:16:38.967 }, 00:16:38.967 "memory_domains": [ 00:16:38.967 { 00:16:38.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.967 "dma_device_type": 2 00:16:38.967 } 00:16:38.967 ], 00:16:38.967 "driver_specific": {} 00:16:38.967 } 00:16:38.967 ] 00:16:38.967 13:40:18 -- common/autotest_common.sh@895 -- # return 0 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.967 13:40:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.227 13:40:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.227 "name": "Existed_Raid", 00:16:39.227 "uuid": "bb177c8c-5e9d-413c-90f5-b12c5c587ac9", 00:16:39.227 "strip_size_kb": 64, 00:16:39.227 "state": "online", 00:16:39.227 "raid_level": "raid0", 00:16:39.227 "superblock": false, 00:16:39.227 "num_base_bdevs": 3, 00:16:39.227 "num_base_bdevs_discovered": 3, 00:16:39.227 "num_base_bdevs_operational": 3, 00:16:39.227 "base_bdevs_list": [ 00:16:39.227 { 00:16:39.227 "name": "BaseBdev1", 00:16:39.227 "uuid": "265a36ec-e692-449c-8bb2-f129390e419a", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 }, 00:16:39.227 { 00:16:39.227 "name": "BaseBdev2", 00:16:39.227 "uuid": "7c9fb529-1ffa-4759-b657-c0c57d7b9144", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 }, 00:16:39.227 { 00:16:39.227 "name": "BaseBdev3", 00:16:39.227 "uuid": "51ce3cf4-0356-4c40-8e3b-172f31252065", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 } 00:16:39.227 ] 00:16:39.227 }' 00:16:39.227 13:40:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.227 13:40:18 -- common/autotest_common.sh@10 -- # set +x 00:16:39.794 13:40:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:40.053 [2024-07-10 13:40:19.162541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.053 [2024-07-10 13:40:19.162625] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.053 [2024-07-10 13:40:19.162700] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.053 13:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.054 13:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.054 13:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.312 13:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.312 "name": "Existed_Raid", 00:16:40.312 "uuid": "bb177c8c-5e9d-413c-90f5-b12c5c587ac9", 00:16:40.312 "strip_size_kb": 64, 00:16:40.312 "state": "offline", 00:16:40.312 "raid_level": "raid0", 00:16:40.312 "superblock": false, 00:16:40.312 "num_base_bdevs": 3, 00:16:40.312 "num_base_bdevs_discovered": 2, 00:16:40.312 "num_base_bdevs_operational": 2, 00:16:40.312 "base_bdevs_list": [ 00:16:40.312 { 00:16:40.312 "name": null, 00:16:40.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.312 "is_configured": false, 00:16:40.312 "data_offset": 0, 00:16:40.312 "data_size": 65536 00:16:40.312 }, 00:16:40.312 { 00:16:40.312 "name": "BaseBdev2", 00:16:40.312 "uuid": "7c9fb529-1ffa-4759-b657-c0c57d7b9144", 00:16:40.312 "is_configured": true, 00:16:40.312 "data_offset": 0, 00:16:40.312 "data_size": 65536 00:16:40.312 }, 00:16:40.312 { 00:16:40.312 "name": "BaseBdev3", 00:16:40.312 "uuid": "51ce3cf4-0356-4c40-8e3b-172f31252065", 00:16:40.312 "is_configured": true, 00:16:40.312 "data_offset": 0, 00:16:40.312 "data_size": 65536 00:16:40.312 } 00:16:40.312 ] 00:16:40.312 }' 00:16:40.312 13:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.312 13:40:19 -- common/autotest_common.sh@10 -- # set +x 00:16:40.880 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:40.880 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.880 13:40:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.880 13:40:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:41.138 13:40:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:41.138 13:40:20 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.138 13:40:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:41.138 [2024-07-10 13:40:20.441208] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.396 13:40:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:41.654 [2024-07-10 13:40:20.886405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.654 [2024-07-10 13:40:20.886541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:41.654 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:41.654 13:40:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:41.654 13:40:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.654 13:40:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.917 13:40:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:41.917 13:40:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:41.917 13:40:21 -- bdev/bdev_raid.sh@287 -- # killprocess 118016 00:16:41.917 13:40:21 -- common/autotest_common.sh@926 -- # '[' -z 118016 ']' 00:16:41.917 13:40:21 -- common/autotest_common.sh@930 -- # kill -0 118016 00:16:41.917 13:40:21 -- common/autotest_common.sh@931 -- # uname 00:16:41.917 13:40:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:41.917 13:40:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118016 00:16:41.917 killing process with pid 118016 00:16:41.917 13:40:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:41.917 13:40:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:41.917 13:40:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118016' 00:16:41.917 13:40:21 -- common/autotest_common.sh@945 -- # kill 118016 00:16:41.917 13:40:21 -- common/autotest_common.sh@950 -- # wait 118016 00:16:41.917 [2024-07-10 13:40:21.212979] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.917 [2024-07-10 13:40:21.213128] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.303 ************************************ 00:16:43.303 END TEST raid_state_function_test 00:16:43.303 ************************************ 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:43.303 00:16:43.303 real 0m11.004s 00:16:43.303 user 0m18.931s 00:16:43.303 sys 0m1.421s 00:16:43.303 13:40:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.303 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:43.303 13:40:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:43.303 13:40:22 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:43.303 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:43.303 ************************************ 00:16:43.303 START TEST raid_state_function_test_sb 00:16:43.303 ************************************ 00:16:43.303 13:40:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=118399 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118399' 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:43.303 Process raid pid: 118399 00:16:43.303 13:40:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118399 /var/tmp/spdk-raid.sock 00:16:43.303 13:40:22 -- common/autotest_common.sh@819 -- # '[' -z 118399 ']' 00:16:43.303 13:40:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:43.303 13:40:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.303 13:40:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:43.303 13:40:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.303 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:16:43.303 [2024-07-10 13:40:22.596549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:43.303 [2024-07-10 13:40:22.596777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.562 [2024-07-10 13:40:22.740297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.821 [2024-07-10 13:40:22.984823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.080 [2024-07-10 13:40:23.227146] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.080 13:40:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:44.080 13:40:23 -- common/autotest_common.sh@852 -- # return 0 00:16:44.080 13:40:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:44.339 [2024-07-10 13:40:23.577332] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.339 [2024-07-10 13:40:23.577512] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.339 [2024-07-10 13:40:23.577540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.339 [2024-07-10 13:40:23.577597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.339 [2024-07-10 13:40:23.577616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:44.339 [2024-07-10 13:40:23.577673] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.339 13:40:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.597 13:40:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.597 "name": "Existed_Raid", 00:16:44.597 "uuid": "b8c10f85-99f0-4d22-9722-fd9b873929ab", 00:16:44.597 "strip_size_kb": 64, 00:16:44.597 "state": "configuring", 00:16:44.597 "raid_level": "raid0", 00:16:44.597 "superblock": true, 00:16:44.597 "num_base_bdevs": 3, 00:16:44.597 "num_base_bdevs_discovered": 0, 00:16:44.597 "num_base_bdevs_operational": 3, 00:16:44.597 "base_bdevs_list": [ 00:16:44.597 { 00:16:44.597 "name": "BaseBdev1", 00:16:44.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.597 "is_configured": false, 00:16:44.597 "data_offset": 0, 00:16:44.597 "data_size": 0 00:16:44.597 }, 00:16:44.597 { 00:16:44.597 "name": "BaseBdev2", 00:16:44.597 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:44.597 "is_configured": false, 00:16:44.597 "data_offset": 0, 00:16:44.597 "data_size": 0 00:16:44.597 }, 00:16:44.597 { 00:16:44.597 "name": "BaseBdev3", 00:16:44.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.597 "is_configured": false, 00:16:44.597 "data_offset": 0, 00:16:44.597 "data_size": 0 00:16:44.597 } 00:16:44.597 ] 00:16:44.597 }' 00:16:44.597 13:40:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.597 13:40:23 -- common/autotest_common.sh@10 -- # set +x 00:16:45.164 13:40:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:45.164 [2024-07-10 13:40:24.499570] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.164 [2024-07-10 13:40:24.499719] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:45.164 13:40:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:45.422 [2024-07-10 13:40:24.695345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.422 [2024-07-10 13:40:24.695500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.422 [2024-07-10 13:40:24.695527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.422 [2024-07-10 13:40:24.695555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.422 [2024-07-10 13:40:24.695569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.422 [2024-07-10 13:40:24.695612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.422 13:40:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.680 [2024-07-10 13:40:24.930297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.680 BaseBdev1 00:16:45.680 13:40:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:45.680 13:40:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:45.680 13:40:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:45.680 13:40:24 -- common/autotest_common.sh@889 -- # local i 00:16:45.680 13:40:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:45.680 13:40:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:45.680 13:40:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.938 13:40:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.196 [ 00:16:46.196 { 00:16:46.196 "name": "BaseBdev1", 00:16:46.196 "aliases": [ 00:16:46.196 "f51917c6-de41-4795-a7e0-a6434495afcf" 00:16:46.196 ], 00:16:46.196 "product_name": "Malloc disk", 00:16:46.196 "block_size": 512, 00:16:46.196 "num_blocks": 65536, 00:16:46.196 "uuid": "f51917c6-de41-4795-a7e0-a6434495afcf", 00:16:46.196 "assigned_rate_limits": { 00:16:46.196 "rw_ios_per_sec": 0, 00:16:46.196 "rw_mbytes_per_sec": 0, 00:16:46.196 "r_mbytes_per_sec": 0, 00:16:46.196 
"w_mbytes_per_sec": 0 00:16:46.196 }, 00:16:46.196 "claimed": true, 00:16:46.196 "claim_type": "exclusive_write", 00:16:46.196 "zoned": false, 00:16:46.196 "supported_io_types": { 00:16:46.196 "read": true, 00:16:46.196 "write": true, 00:16:46.196 "unmap": true, 00:16:46.196 "write_zeroes": true, 00:16:46.196 "flush": true, 00:16:46.196 "reset": true, 00:16:46.196 "compare": false, 00:16:46.196 "compare_and_write": false, 00:16:46.196 "abort": true, 00:16:46.196 "nvme_admin": false, 00:16:46.196 "nvme_io": false 00:16:46.196 }, 00:16:46.196 "memory_domains": [ 00:16:46.196 { 00:16:46.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.196 "dma_device_type": 2 00:16:46.196 } 00:16:46.196 ], 00:16:46.196 "driver_specific": {} 00:16:46.196 } 00:16:46.196 ] 00:16:46.196 13:40:25 -- common/autotest_common.sh@895 -- # return 0 00:16:46.196 13:40:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:46.196 13:40:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.196 13:40:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:46.196 13:40:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:46.196 13:40:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.197 "name": "Existed_Raid", 00:16:46.197 "uuid": "9b73c223-d3c1-4ccf-892a-341a325dbe2d", 00:16:46.197 "strip_size_kb": 64, 00:16:46.197 "state": "configuring", 00:16:46.197 "raid_level": "raid0", 00:16:46.197 "superblock": true, 00:16:46.197 "num_base_bdevs": 3, 00:16:46.197 "num_base_bdevs_discovered": 1, 00:16:46.197 "num_base_bdevs_operational": 3, 00:16:46.197 "base_bdevs_list": [ 00:16:46.197 { 00:16:46.197 "name": "BaseBdev1", 00:16:46.197 "uuid": "f51917c6-de41-4795-a7e0-a6434495afcf", 00:16:46.197 "is_configured": true, 00:16:46.197 "data_offset": 2048, 00:16:46.197 "data_size": 63488 00:16:46.197 }, 00:16:46.197 { 00:16:46.197 "name": "BaseBdev2", 00:16:46.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.197 "is_configured": false, 00:16:46.197 "data_offset": 0, 00:16:46.197 "data_size": 0 00:16:46.197 }, 00:16:46.197 { 00:16:46.197 "name": "BaseBdev3", 00:16:46.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.197 "is_configured": false, 00:16:46.197 "data_offset": 0, 00:16:46.197 "data_size": 0 00:16:46.197 } 00:16:46.197 ] 00:16:46.197 }' 00:16:46.197 13:40:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.197 13:40:25 -- common/autotest_common.sh@10 -- # set +x 00:16:47.132 13:40:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.132 [2024-07-10 13:40:26.276002] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.132 [2024-07-10 13:40:26.276201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:47.132 13:40:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:47.132 13:40:26 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:47.391 13:40:26 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.651 BaseBdev1 00:16:47.651 13:40:26 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:47.651 13:40:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:47.651 13:40:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:47.651 13:40:26 -- common/autotest_common.sh@889 -- # local i 00:16:47.651 13:40:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:47.651 13:40:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:47.651 13:40:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:47.651 13:40:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.910 [ 00:16:47.910 { 00:16:47.910 "name": "BaseBdev1", 00:16:47.910 "aliases": [ 00:16:47.910 "00bf497f-14b3-4dcb-9df0-109b5ce9f9c6" 00:16:47.910 ], 00:16:47.910 "product_name": "Malloc disk", 00:16:47.910 "block_size": 512, 00:16:47.910 "num_blocks": 65536, 00:16:47.910 "uuid": "00bf497f-14b3-4dcb-9df0-109b5ce9f9c6", 00:16:47.910 "assigned_rate_limits": { 00:16:47.910 "rw_ios_per_sec": 0, 00:16:47.910 "rw_mbytes_per_sec": 0, 00:16:47.910 "r_mbytes_per_sec": 0, 00:16:47.910 "w_mbytes_per_sec": 0 00:16:47.910 }, 00:16:47.910 "claimed": false, 00:16:47.910 "zoned": false, 00:16:47.910 "supported_io_types": { 00:16:47.910 "read": true, 00:16:47.910 "write": true, 00:16:47.910 "unmap": true, 00:16:47.910 "write_zeroes": true, 00:16:47.910 "flush": true, 00:16:47.910 "reset": true, 00:16:47.910 "compare": false, 00:16:47.910 "compare_and_write": false, 00:16:47.910 "abort": true, 00:16:47.910 "nvme_admin": false, 00:16:47.910 "nvme_io": false 00:16:47.910 }, 00:16:47.910 "memory_domains": [ 00:16:47.910 { 00:16:47.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.910 "dma_device_type": 2 00:16:47.910 } 00:16:47.910 ], 00:16:47.910 "driver_specific": {} 00:16:47.910 } 00:16:47.910 ] 00:16:47.910 13:40:27 -- common/autotest_common.sh@895 -- # return 0 00:16:47.910 13:40:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:48.169 [2024-07-10 13:40:27.338098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.169 [2024-07-10 13:40:27.340179] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.169 [2024-07-10 13:40:27.340269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.169 [2024-07-10 13:40:27.340292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.169 [2024-07-10 13:40:27.340324] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:48.169 
13:40:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.169 13:40:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.429 13:40:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.429 "name": "Existed_Raid", 00:16:48.429 "uuid": "d340c4e3-d309-4623-b91c-fa2f785eee34", 00:16:48.429 "strip_size_kb": 64, 00:16:48.429 "state": "configuring", 00:16:48.429 "raid_level": "raid0", 00:16:48.429 "superblock": true, 00:16:48.429 "num_base_bdevs": 3, 00:16:48.429 "num_base_bdevs_discovered": 1, 00:16:48.429 "num_base_bdevs_operational": 3, 00:16:48.429 "base_bdevs_list": [ 00:16:48.429 { 00:16:48.429 "name": "BaseBdev1", 00:16:48.429 "uuid": "00bf497f-14b3-4dcb-9df0-109b5ce9f9c6", 00:16:48.429 "is_configured": true, 00:16:48.429 "data_offset": 2048, 00:16:48.429 "data_size": 63488 00:16:48.429 }, 00:16:48.429 { 00:16:48.429 "name": "BaseBdev2", 00:16:48.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.429 "is_configured": false, 00:16:48.429 "data_offset": 0, 00:16:48.429 "data_size": 0 00:16:48.429 }, 00:16:48.429 { 00:16:48.429 "name": "BaseBdev3", 00:16:48.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.429 "is_configured": false, 00:16:48.429 "data_offset": 0, 00:16:48.429 "data_size": 0 00:16:48.429 } 00:16:48.429 ] 00:16:48.429 }' 00:16:48.429 13:40:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.429 13:40:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.996 13:40:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.255 [2024-07-10 13:40:28.403254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.255 BaseBdev2 00:16:49.255 13:40:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:49.255 13:40:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:49.255 13:40:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.255 13:40:28 -- common/autotest_common.sh@889 -- # local i 00:16:49.255 13:40:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.255 13:40:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.255 13:40:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.255 13:40:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.513 [ 00:16:49.513 { 00:16:49.513 "name": "BaseBdev2", 00:16:49.513 "aliases": [ 00:16:49.513 
"4d62f561-9187-437b-986f-ae093df57e2d" 00:16:49.513 ], 00:16:49.513 "product_name": "Malloc disk", 00:16:49.513 "block_size": 512, 00:16:49.513 "num_blocks": 65536, 00:16:49.513 "uuid": "4d62f561-9187-437b-986f-ae093df57e2d", 00:16:49.513 "assigned_rate_limits": { 00:16:49.513 "rw_ios_per_sec": 0, 00:16:49.513 "rw_mbytes_per_sec": 0, 00:16:49.513 "r_mbytes_per_sec": 0, 00:16:49.513 "w_mbytes_per_sec": 0 00:16:49.513 }, 00:16:49.513 "claimed": true, 00:16:49.513 "claim_type": "exclusive_write", 00:16:49.513 "zoned": false, 00:16:49.513 "supported_io_types": { 00:16:49.513 "read": true, 00:16:49.513 "write": true, 00:16:49.513 "unmap": true, 00:16:49.513 "write_zeroes": true, 00:16:49.513 "flush": true, 00:16:49.513 "reset": true, 00:16:49.513 "compare": false, 00:16:49.513 "compare_and_write": false, 00:16:49.513 "abort": true, 00:16:49.513 "nvme_admin": false, 00:16:49.513 "nvme_io": false 00:16:49.513 }, 00:16:49.513 "memory_domains": [ 00:16:49.513 { 00:16:49.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.513 "dma_device_type": 2 00:16:49.513 } 00:16:49.513 ], 00:16:49.513 "driver_specific": {} 00:16:49.513 } 00:16:49.513 ] 00:16:49.513 13:40:28 -- common/autotest_common.sh@895 -- # return 0 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.513 13:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.772 13:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.772 "name": "Existed_Raid", 00:16:49.772 "uuid": "d340c4e3-d309-4623-b91c-fa2f785eee34", 00:16:49.772 "strip_size_kb": 64, 00:16:49.772 "state": "configuring", 00:16:49.772 "raid_level": "raid0", 00:16:49.772 "superblock": true, 00:16:49.772 "num_base_bdevs": 3, 00:16:49.772 "num_base_bdevs_discovered": 2, 00:16:49.772 "num_base_bdevs_operational": 3, 00:16:49.772 "base_bdevs_list": [ 00:16:49.772 { 00:16:49.772 "name": "BaseBdev1", 00:16:49.772 "uuid": "00bf497f-14b3-4dcb-9df0-109b5ce9f9c6", 00:16:49.772 "is_configured": true, 00:16:49.772 "data_offset": 2048, 00:16:49.772 "data_size": 63488 00:16:49.772 }, 00:16:49.772 { 00:16:49.772 "name": "BaseBdev2", 00:16:49.772 "uuid": "4d62f561-9187-437b-986f-ae093df57e2d", 00:16:49.772 "is_configured": true, 00:16:49.772 "data_offset": 2048, 00:16:49.772 "data_size": 63488 00:16:49.772 }, 00:16:49.772 { 00:16:49.772 "name": "BaseBdev3", 00:16:49.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.772 "is_configured": false, 00:16:49.772 "data_offset": 0, 00:16:49.772 "data_size": 0 00:16:49.772 
} 00:16:49.772 ] 00:16:49.772 }' 00:16:49.772 13:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.772 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.340 13:40:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.598 [2024-07-10 13:40:29.889162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.598 [2024-07-10 13:40:29.889471] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:50.598 [2024-07-10 13:40:29.889506] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.598 [2024-07-10 13:40:29.889649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:50.599 BaseBdev3 00:16:50.599 [2024-07-10 13:40:29.889972] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:50.599 [2024-07-10 13:40:29.889983] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:50.599 [2024-07-10 13:40:29.890129] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.599 13:40:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:50.599 13:40:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:50.599 13:40:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:50.599 13:40:29 -- common/autotest_common.sh@889 -- # local i 00:16:50.599 13:40:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:50.599 13:40:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:50.599 13:40:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.857 13:40:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:51.123 [ 00:16:51.123 { 00:16:51.123 "name": "BaseBdev3", 00:16:51.123 "aliases": [ 00:16:51.123 "ebae83bc-fce4-4ee1-95c2-bc8133816d79" 00:16:51.123 ], 00:16:51.123 "product_name": "Malloc disk", 00:16:51.123 "block_size": 512, 00:16:51.123 "num_blocks": 65536, 00:16:51.123 "uuid": "ebae83bc-fce4-4ee1-95c2-bc8133816d79", 00:16:51.123 "assigned_rate_limits": { 00:16:51.123 "rw_ios_per_sec": 0, 00:16:51.123 "rw_mbytes_per_sec": 0, 00:16:51.123 "r_mbytes_per_sec": 0, 00:16:51.123 "w_mbytes_per_sec": 0 00:16:51.123 }, 00:16:51.123 "claimed": true, 00:16:51.123 "claim_type": "exclusive_write", 00:16:51.123 "zoned": false, 00:16:51.123 "supported_io_types": { 00:16:51.123 "read": true, 00:16:51.123 "write": true, 00:16:51.123 "unmap": true, 00:16:51.123 "write_zeroes": true, 00:16:51.123 "flush": true, 00:16:51.123 "reset": true, 00:16:51.123 "compare": false, 00:16:51.123 "compare_and_write": false, 00:16:51.123 "abort": true, 00:16:51.123 "nvme_admin": false, 00:16:51.123 "nvme_io": false 00:16:51.123 }, 00:16:51.123 "memory_domains": [ 00:16:51.123 { 00:16:51.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.123 "dma_device_type": 2 00:16:51.123 } 00:16:51.123 ], 00:16:51.123 "driver_specific": {} 00:16:51.123 } 00:16:51.123 ] 00:16:51.123 13:40:30 -- common/autotest_common.sh@895 -- # return 0 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.123 13:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.390 13:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.390 "name": "Existed_Raid", 00:16:51.390 "uuid": "d340c4e3-d309-4623-b91c-fa2f785eee34", 00:16:51.390 "strip_size_kb": 64, 00:16:51.390 "state": "online", 00:16:51.390 "raid_level": "raid0", 00:16:51.390 "superblock": true, 00:16:51.390 "num_base_bdevs": 3, 00:16:51.390 "num_base_bdevs_discovered": 3, 00:16:51.390 "num_base_bdevs_operational": 3, 00:16:51.390 "base_bdevs_list": [ 00:16:51.390 { 00:16:51.390 "name": "BaseBdev1", 00:16:51.390 "uuid": "00bf497f-14b3-4dcb-9df0-109b5ce9f9c6", 00:16:51.390 "is_configured": true, 00:16:51.390 "data_offset": 2048, 00:16:51.390 "data_size": 63488 00:16:51.390 }, 00:16:51.390 { 00:16:51.390 "name": "BaseBdev2", 00:16:51.390 "uuid": "4d62f561-9187-437b-986f-ae093df57e2d", 00:16:51.390 "is_configured": true, 00:16:51.390 "data_offset": 2048, 00:16:51.390 "data_size": 63488 00:16:51.390 }, 00:16:51.390 { 00:16:51.390 "name": "BaseBdev3", 00:16:51.390 "uuid": "ebae83bc-fce4-4ee1-95c2-bc8133816d79", 00:16:51.390 "is_configured": true, 00:16:51.390 "data_offset": 2048, 00:16:51.390 "data_size": 63488 00:16:51.390 } 00:16:51.390 ] 00:16:51.390 }' 00:16:51.390 13:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.390 13:40:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.957 13:40:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:52.216 [2024-07-10 13:40:31.338781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.216 [2024-07-10 13:40:31.338898] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.216 [2024-07-10 13:40:31.338999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.216 13:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.475 13:40:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.475 "name": "Existed_Raid", 00:16:52.475 "uuid": "d340c4e3-d309-4623-b91c-fa2f785eee34", 00:16:52.475 "strip_size_kb": 64, 00:16:52.475 "state": "offline", 00:16:52.475 "raid_level": "raid0", 00:16:52.475 "superblock": true, 00:16:52.475 "num_base_bdevs": 3, 00:16:52.475 "num_base_bdevs_discovered": 2, 00:16:52.475 "num_base_bdevs_operational": 2, 00:16:52.475 "base_bdevs_list": [ 00:16:52.475 { 00:16:52.475 "name": null, 00:16:52.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.475 "is_configured": false, 00:16:52.475 "data_offset": 2048, 00:16:52.475 "data_size": 63488 00:16:52.475 }, 00:16:52.475 { 00:16:52.475 "name": "BaseBdev2", 00:16:52.475 "uuid": "4d62f561-9187-437b-986f-ae093df57e2d", 00:16:52.475 "is_configured": true, 00:16:52.475 "data_offset": 2048, 00:16:52.475 "data_size": 63488 00:16:52.475 }, 00:16:52.475 { 00:16:52.475 "name": "BaseBdev3", 00:16:52.475 "uuid": "ebae83bc-fce4-4ee1-95c2-bc8133816d79", 00:16:52.475 "is_configured": true, 00:16:52.475 "data_offset": 2048, 00:16:52.475 "data_size": 63488 00:16:52.475 } 00:16:52.475 ] 00:16:52.475 }' 00:16:52.475 13:40:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.475 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:16:53.042 13:40:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:53.042 13:40:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:53.042 13:40:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.042 13:40:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:53.300 13:40:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:53.300 13:40:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.300 13:40:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:53.300 [2024-07-10 13:40:32.583242] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.558 13:40:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:53.816 [2024-07-10 13:40:33.054812] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.816 [2024-07-10 
13:40:33.054966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:54.074 13:40:33 -- bdev/bdev_raid.sh@287 -- # killprocess 118399 00:16:54.074 13:40:33 -- common/autotest_common.sh@926 -- # '[' -z 118399 ']' 00:16:54.074 13:40:33 -- common/autotest_common.sh@930 -- # kill -0 118399 00:16:54.074 13:40:33 -- common/autotest_common.sh@931 -- # uname 00:16:54.074 13:40:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.074 13:40:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118399 00:16:54.074 killing process with pid 118399 00:16:54.074 13:40:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:54.074 13:40:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:54.074 13:40:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118399' 00:16:54.074 13:40:33 -- common/autotest_common.sh@945 -- # kill 118399 00:16:54.074 13:40:33 -- common/autotest_common.sh@950 -- # wait 118399 00:16:54.074 [2024-07-10 13:40:33.387608] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.074 [2024-07-10 13:40:33.387731] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.449 ************************************ 00:16:55.449 END TEST raid_state_function_test_sb 00:16:55.449 ************************************ 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:55.449 00:16:55.449 real 0m12.167s 00:16:55.449 user 0m20.968s 00:16:55.449 sys 0m1.524s 00:16:55.449 13:40:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.449 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:55.449 13:40:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:55.449 13:40:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:55.449 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 ************************************ 00:16:55.449 START TEST raid_superblock_test 00:16:55.449 ************************************ 00:16:55.449 13:40:34 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:55.449 13:40:34 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=118808 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:55.449 13:40:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118808 /var/tmp/spdk-raid.sock 00:16:55.449 13:40:34 -- common/autotest_common.sh@819 -- # '[' -z 118808 ']' 00:16:55.449 13:40:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:55.449 13:40:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.449 13:40:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:55.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:55.449 13:40:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.449 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:16:55.708 [2024-07-10 13:40:34.816894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:55.708 [2024-07-10 13:40:34.817119] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118808 ] 00:16:55.708 [2024-07-10 13:40:34.990637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.967 [2024-07-10 13:40:35.191903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.227 [2024-07-10 13:40:35.396757] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.486 13:40:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.486 13:40:35 -- common/autotest_common.sh@852 -- # return 0 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.486 13:40:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:56.486 malloc1 00:16:56.755 13:40:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.755 [2024-07-10 13:40:36.015355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.755 [2024-07-10 13:40:36.015536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.755 
[2024-07-10 13:40:36.015579] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:56.755 [2024-07-10 13:40:36.015632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.755 [2024-07-10 13:40:36.017660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.755 [2024-07-10 13:40:36.017756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.755 pt1 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.755 13:40:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:57.028 malloc2 00:16:57.028 13:40:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.288 [2024-07-10 13:40:36.464414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.288 [2024-07-10 13:40:36.464579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.288 [2024-07-10 13:40:36.464634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:57.288 [2024-07-10 13:40:36.464700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.288 [2024-07-10 13:40:36.466665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.288 [2024-07-10 13:40:36.466761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.288 pt2 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.288 13:40:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:57.548 malloc3 00:16:57.548 13:40:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.548 [2024-07-10 13:40:36.881994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.548 [2024-07-10 13:40:36.882169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.548 
[2024-07-10 13:40:36.882218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:57.548 [2024-07-10 13:40:36.882270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.548 [2024-07-10 13:40:36.884247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.548 [2024-07-10 13:40:36.884330] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.548 pt3 00:16:57.548 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:57.548 13:40:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:57.548 13:40:36 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:57.808 [2024-07-10 13:40:37.077703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.808 [2024-07-10 13:40:37.079408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.808 [2024-07-10 13:40:37.079515] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.808 [2024-07-10 13:40:37.079693] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:57.808 [2024-07-10 13:40:37.079735] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.808 [2024-07-10 13:40:37.079894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:57.808 [2024-07-10 13:40:37.080229] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:57.808 [2024-07-10 13:40:37.080271] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:57.808 [2024-07-10 13:40:37.080419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.808 13:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.068 13:40:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.068 "name": "raid_bdev1", 00:16:58.068 "uuid": "b54cd359-fbc2-4f6e-a720-1fe1a9f19206", 00:16:58.068 "strip_size_kb": 64, 00:16:58.068 "state": "online", 00:16:58.068 "raid_level": "raid0", 00:16:58.068 "superblock": true, 00:16:58.068 "num_base_bdevs": 3, 00:16:58.068 "num_base_bdevs_discovered": 3, 00:16:58.068 "num_base_bdevs_operational": 3, 00:16:58.068 "base_bdevs_list": [ 00:16:58.068 { 00:16:58.068 "name": "pt1", 00:16:58.068 "uuid": 
"df8b909e-5d81-56c6-83ea-ea58beeb1ff7", 00:16:58.068 "is_configured": true, 00:16:58.068 "data_offset": 2048, 00:16:58.068 "data_size": 63488 00:16:58.068 }, 00:16:58.068 { 00:16:58.068 "name": "pt2", 00:16:58.068 "uuid": "31e3f455-2b22-553c-b463-691c982243ae", 00:16:58.068 "is_configured": true, 00:16:58.068 "data_offset": 2048, 00:16:58.068 "data_size": 63488 00:16:58.068 }, 00:16:58.068 { 00:16:58.068 "name": "pt3", 00:16:58.068 "uuid": "a3ad66f9-cf3d-5fc2-83f1-089d0a04d0b7", 00:16:58.068 "is_configured": true, 00:16:58.068 "data_offset": 2048, 00:16:58.068 "data_size": 63488 00:16:58.068 } 00:16:58.068 ] 00:16:58.068 }' 00:16:58.068 13:40:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.068 13:40:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.637 13:40:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:58.637 13:40:37 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:58.896 [2024-07-10 13:40:38.028146] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.896 13:40:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b54cd359-fbc2-4f6e-a720-1fe1a9f19206 00:16:58.896 13:40:38 -- bdev/bdev_raid.sh@380 -- # '[' -z b54cd359-fbc2-4f6e-a720-1fe1a9f19206 ']' 00:16:58.896 13:40:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.896 [2024-07-10 13:40:38.223633] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.896 [2024-07-10 13:40:38.223732] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.896 [2024-07-10 13:40:38.223841] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.896 [2024-07-10 13:40:38.223914] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.896 [2024-07-10 13:40:38.223950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:58.896 13:40:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.896 13:40:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:59.156 13:40:38 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:59.156 13:40:38 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:59.156 13:40:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.156 13:40:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:59.415 13:40:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.415 13:40:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:59.675 13:40:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.675 13:40:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:59.935 13:40:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:59.935 13:40:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:59.935 13:40:39 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:59.935 13:40:39 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:59.935 13:40:39 -- common/autotest_common.sh@640 -- # local es=0 00:16:59.935 13:40:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:59.935 13:40:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.935 13:40:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.935 13:40:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.935 13:40:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.935 13:40:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.935 13:40:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.935 13:40:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.935 13:40:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:59.935 13:40:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:00.194 [2024-07-10 13:40:39.437512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:00.194 [2024-07-10 13:40:39.439436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:00.194 [2024-07-10 13:40:39.439521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:00.194 [2024-07-10 13:40:39.439616] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:00.194 [2024-07-10 13:40:39.439730] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:00.194 [2024-07-10 13:40:39.439788] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:00.194 [2024-07-10 13:40:39.439876] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.194 [2024-07-10 13:40:39.439912] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:00.195 request: 00:17:00.195 { 00:17:00.195 "name": "raid_bdev1", 00:17:00.195 "raid_level": "raid0", 00:17:00.195 "base_bdevs": [ 00:17:00.195 "malloc1", 00:17:00.195 "malloc2", 00:17:00.195 "malloc3" 00:17:00.195 ], 00:17:00.195 "superblock": false, 00:17:00.195 "strip_size_kb": 64, 00:17:00.195 "method": "bdev_raid_create", 00:17:00.195 "req_id": 1 00:17:00.195 } 00:17:00.195 Got JSON-RPC error response 00:17:00.195 response: 00:17:00.195 { 00:17:00.195 "code": -17, 00:17:00.195 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:00.195 } 00:17:00.195 13:40:39 -- common/autotest_common.sh@643 -- # es=1 00:17:00.195 13:40:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:00.195 13:40:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:00.195 13:40:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:00.195 13:40:39 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.195 13:40:39 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:00.454 13:40:39 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:00.454 13:40:39 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:00.454 13:40:39 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.791 [2024-07-10 13:40:39.864736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.791 [2024-07-10 13:40:39.864924] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.791 [2024-07-10 13:40:39.864975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:00.791 [2024-07-10 13:40:39.865014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.791 [2024-07-10 13:40:39.867259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.791 [2024-07-10 13:40:39.867361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.791 [2024-07-10 13:40:39.867524] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:00.791 [2024-07-10 13:40:39.867620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.791 pt1 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.791 13:40:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.791 13:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.791 "name": "raid_bdev1", 00:17:00.791 "uuid": "b54cd359-fbc2-4f6e-a720-1fe1a9f19206", 00:17:00.791 "strip_size_kb": 64, 00:17:00.791 "state": "configuring", 00:17:00.791 "raid_level": "raid0", 00:17:00.791 "superblock": true, 00:17:00.791 "num_base_bdevs": 3, 00:17:00.791 "num_base_bdevs_discovered": 1, 00:17:00.791 "num_base_bdevs_operational": 3, 00:17:00.791 "base_bdevs_list": [ 00:17:00.791 { 00:17:00.791 "name": "pt1", 00:17:00.791 "uuid": "df8b909e-5d81-56c6-83ea-ea58beeb1ff7", 00:17:00.791 "is_configured": true, 00:17:00.791 "data_offset": 2048, 00:17:00.791 "data_size": 63488 00:17:00.791 }, 00:17:00.791 { 00:17:00.791 "name": null, 00:17:00.791 "uuid": "31e3f455-2b22-553c-b463-691c982243ae", 00:17:00.791 "is_configured": false, 00:17:00.791 "data_offset": 2048, 00:17:00.791 "data_size": 63488 00:17:00.791 }, 00:17:00.791 { 00:17:00.791 "name": null, 00:17:00.791 "uuid": "a3ad66f9-cf3d-5fc2-83f1-089d0a04d0b7", 00:17:00.791 "is_configured": false, 00:17:00.791 
"data_offset": 2048, 00:17:00.791 "data_size": 63488 00:17:00.791 } 00:17:00.791 ] 00:17:00.791 }' 00:17:00.791 13:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.791 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:17:01.787 13:40:40 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:01.787 13:40:40 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.787 [2024-07-10 13:40:40.930948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.787 [2024-07-10 13:40:40.931115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.787 [2024-07-10 13:40:40.931202] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:01.787 [2024-07-10 13:40:40.931246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.787 [2024-07-10 13:40:40.931754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.787 [2024-07-10 13:40:40.931828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.787 [2024-07-10 13:40:40.931988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:01.787 [2024-07-10 13:40:40.932043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.787 pt2 00:17:01.787 13:40:40 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.068 [2024-07-10 13:40:41.130623] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.068 "name": "raid_bdev1", 00:17:02.068 "uuid": "b54cd359-fbc2-4f6e-a720-1fe1a9f19206", 00:17:02.068 "strip_size_kb": 64, 00:17:02.068 "state": "configuring", 00:17:02.068 "raid_level": "raid0", 00:17:02.068 "superblock": true, 00:17:02.068 "num_base_bdevs": 3, 00:17:02.068 "num_base_bdevs_discovered": 1, 00:17:02.068 "num_base_bdevs_operational": 3, 00:17:02.068 "base_bdevs_list": [ 00:17:02.068 { 00:17:02.068 "name": "pt1", 00:17:02.068 "uuid": "df8b909e-5d81-56c6-83ea-ea58beeb1ff7", 00:17:02.068 "is_configured": true, 00:17:02.068 "data_offset": 2048, 00:17:02.068 "data_size": 63488 00:17:02.068 }, 00:17:02.068 { 00:17:02.068 "name": null, 00:17:02.068 "uuid": 
"31e3f455-2b22-553c-b463-691c982243ae", 00:17:02.068 "is_configured": false, 00:17:02.068 "data_offset": 2048, 00:17:02.068 "data_size": 63488 00:17:02.068 }, 00:17:02.068 { 00:17:02.068 "name": null, 00:17:02.068 "uuid": "a3ad66f9-cf3d-5fc2-83f1-089d0a04d0b7", 00:17:02.068 "is_configured": false, 00:17:02.068 "data_offset": 2048, 00:17:02.068 "data_size": 63488 00:17:02.068 } 00:17:02.068 ] 00:17:02.068 }' 00:17:02.068 13:40:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.068 13:40:41 -- common/autotest_common.sh@10 -- # set +x 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.081 [2024-07-10 13:40:42.240648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.081 [2024-07-10 13:40:42.240827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.081 [2024-07-10 13:40:42.240878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:03.081 [2024-07-10 13:40:42.240963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.081 [2024-07-10 13:40:42.241502] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.081 [2024-07-10 13:40:42.241576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.081 [2024-07-10 13:40:42.241726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:03.081 [2024-07-10 13:40:42.241774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.081 pt2 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:03.081 13:40:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.434 [2024-07-10 13:40:42.436316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.434 [2024-07-10 13:40:42.436463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.434 [2024-07-10 13:40:42.436528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:03.434 [2024-07-10 13:40:42.436569] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.434 [2024-07-10 13:40:42.437032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.434 [2024-07-10 13:40:42.437100] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.434 [2024-07-10 13:40:42.437260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:03.434 [2024-07-10 13:40:42.437311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.434 [2024-07-10 13:40:42.437450] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:03.434 [2024-07-10 13:40:42.437484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.434 [2024-07-10 13:40:42.437628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:17:03.434 [2024-07-10 13:40:42.437989] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:03.434 [2024-07-10 13:40:42.438034] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:03.434 [2024-07-10 13:40:42.438213] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.434 pt3 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.434 "name": "raid_bdev1", 00:17:03.434 "uuid": "b54cd359-fbc2-4f6e-a720-1fe1a9f19206", 00:17:03.434 "strip_size_kb": 64, 00:17:03.434 "state": "online", 00:17:03.434 "raid_level": "raid0", 00:17:03.434 "superblock": true, 00:17:03.434 "num_base_bdevs": 3, 00:17:03.434 "num_base_bdevs_discovered": 3, 00:17:03.434 "num_base_bdevs_operational": 3, 00:17:03.434 "base_bdevs_list": [ 00:17:03.434 { 00:17:03.434 "name": "pt1", 00:17:03.434 "uuid": "df8b909e-5d81-56c6-83ea-ea58beeb1ff7", 00:17:03.434 "is_configured": true, 00:17:03.434 "data_offset": 2048, 00:17:03.434 "data_size": 63488 00:17:03.434 }, 00:17:03.434 { 00:17:03.434 "name": "pt2", 00:17:03.434 "uuid": "31e3f455-2b22-553c-b463-691c982243ae", 00:17:03.434 "is_configured": true, 00:17:03.434 "data_offset": 2048, 00:17:03.434 "data_size": 63488 00:17:03.434 }, 00:17:03.434 { 00:17:03.434 "name": "pt3", 00:17:03.434 "uuid": "a3ad66f9-cf3d-5fc2-83f1-089d0a04d0b7", 00:17:03.434 "is_configured": true, 00:17:03.434 "data_offset": 2048, 00:17:03.434 "data_size": 63488 00:17:03.434 } 00:17:03.434 ] 00:17:03.434 }' 00:17:03.434 13:40:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.434 13:40:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.015 13:40:43 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:04.015 13:40:43 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:04.274 [2024-07-10 13:40:43.518695] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.275 13:40:43 -- bdev/bdev_raid.sh@430 -- # '[' b54cd359-fbc2-4f6e-a720-1fe1a9f19206 '!=' b54cd359-fbc2-4f6e-a720-1fe1a9f19206 ']' 00:17:04.275 13:40:43 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:04.275 13:40:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:04.275 
13:40:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:04.275 13:40:43 -- bdev/bdev_raid.sh@511 -- # killprocess 118808 00:17:04.275 13:40:43 -- common/autotest_common.sh@926 -- # '[' -z 118808 ']' 00:17:04.275 13:40:43 -- common/autotest_common.sh@930 -- # kill -0 118808 00:17:04.275 13:40:43 -- common/autotest_common.sh@931 -- # uname 00:17:04.275 13:40:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.275 13:40:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118808 00:17:04.275 killing process with pid 118808 00:17:04.275 13:40:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:04.275 13:40:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:04.275 13:40:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118808' 00:17:04.275 13:40:43 -- common/autotest_common.sh@945 -- # kill 118808 00:17:04.275 13:40:43 -- common/autotest_common.sh@950 -- # wait 118808 00:17:04.275 [2024-07-10 13:40:43.561906] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.275 [2024-07-10 13:40:43.562007] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.275 [2024-07-10 13:40:43.562065] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.275 [2024-07-10 13:40:43.562108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:04.843 [2024-07-10 13:40:43.892671] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.222 ************************************ 00:17:06.222 END TEST raid_superblock_test 00:17:06.222 ************************************ 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:06.222 00:17:06.222 real 0m10.586s 00:17:06.222 user 0m17.977s 00:17:06.222 sys 0m1.231s 00:17:06.222 13:40:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.222 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:06.222 13:40:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:06.222 13:40:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:06.222 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.222 ************************************ 00:17:06.222 START TEST raid_state_function_test 00:17:06.222 ************************************ 00:17:06.222 13:40:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=119131 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119131' 00:17:06.222 Process raid pid: 119131 00:17:06.222 13:40:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119131 /var/tmp/spdk-raid.sock 00:17:06.222 13:40:45 -- common/autotest_common.sh@819 -- # '[' -z 119131 ']' 00:17:06.222 13:40:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.222 13:40:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:06.222 13:40:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.222 13:40:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:06.222 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.222 [2024-07-10 13:40:45.477167] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
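
For context, the xtrace above shows raid_state_function_test standing up a dedicated SPDK app before issuing any RPCs: bdev_svc is launched against its own UNIX socket (-r /var/tmp/spdk-raid.sock), pinned to core 0 (-i 0), with bdev_raid debug logging enabled (-L bdev_raid), and the test then blocks until that socket accepts connections. A minimal sketch of that launch sequence, assuming the waitforlisten helper from SPDK's test/common/autotest_common.sh and a $rootdir variable pointing at the repo checkout:

    # Hypothetical reconstruction of the launch traced above; $rootdir is assumed.
    "$rootdir"/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the app's JSON-RPC server is listening on the socket.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
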
00:17:06.222 [2024-07-10 13:40:45.477350] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.481 [2024-07-10 13:40:45.639288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.740 [2024-07-10 13:40:45.844233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.740 [2024-07-10 13:40:46.066646] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.999 13:40:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.999 13:40:46 -- common/autotest_common.sh@852 -- # return 0 00:17:06.999 13:40:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.278 [2024-07-10 13:40:46.528035] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.278 [2024-07-10 13:40:46.528178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.278 [2024-07-10 13:40:46.528214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.278 [2024-07-10 13:40:46.528244] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.278 [2024-07-10 13:40:46.528263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.278 [2024-07-10 13:40:46.528331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.278 13:40:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.537 13:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.537 "name": "Existed_Raid", 00:17:07.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.537 "strip_size_kb": 64, 00:17:07.538 "state": "configuring", 00:17:07.538 "raid_level": "concat", 00:17:07.538 "superblock": false, 00:17:07.538 "num_base_bdevs": 3, 00:17:07.538 "num_base_bdevs_discovered": 0, 00:17:07.538 "num_base_bdevs_operational": 3, 00:17:07.538 "base_bdevs_list": [ 00:17:07.538 { 00:17:07.538 "name": "BaseBdev1", 00:17:07.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.538 "is_configured": false, 00:17:07.538 "data_offset": 0, 00:17:07.538 "data_size": 0 00:17:07.538 }, 00:17:07.538 { 00:17:07.538 "name": "BaseBdev2", 00:17:07.538 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:07.538 "is_configured": false, 00:17:07.538 "data_offset": 0, 00:17:07.538 "data_size": 0 00:17:07.538 }, 00:17:07.538 { 00:17:07.538 "name": "BaseBdev3", 00:17:07.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.538 "is_configured": false, 00:17:07.538 "data_offset": 0, 00:17:07.538 "data_size": 0 00:17:07.538 } 00:17:07.538 ] 00:17:07.538 }' 00:17:07.538 13:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.538 13:40:46 -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 13:40:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.364 [2024-07-10 13:40:47.550133] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.364 [2024-07-10 13:40:47.550272] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:08.364 13:40:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:08.624 [2024-07-10 13:40:47.757784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.624 [2024-07-10 13:40:47.757904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.624 [2024-07-10 13:40:47.757938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.624 [2024-07-10 13:40:47.757968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.624 [2024-07-10 13:40:47.757985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.624 [2024-07-10 13:40:47.758049] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.624 13:40:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.883 [2024-07-10 13:40:47.996839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.883 BaseBdev1 00:17:08.883 13:40:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:08.883 13:40:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:08.883 13:40:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:08.883 13:40:48 -- common/autotest_common.sh@889 -- # local i 00:17:08.883 13:40:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:08.883 13:40:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:08.883 13:40:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.142 13:40:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.142 [ 00:17:09.142 { 00:17:09.142 "name": "BaseBdev1", 00:17:09.142 "aliases": [ 00:17:09.142 "106bad14-9369-499f-9703-53e540889e32" 00:17:09.142 ], 00:17:09.142 "product_name": "Malloc disk", 00:17:09.142 "block_size": 512, 00:17:09.142 "num_blocks": 65536, 00:17:09.142 "uuid": "106bad14-9369-499f-9703-53e540889e32", 00:17:09.142 "assigned_rate_limits": { 00:17:09.142 "rw_ios_per_sec": 0, 00:17:09.142 "rw_mbytes_per_sec": 0, 00:17:09.142 "r_mbytes_per_sec": 0, 00:17:09.142 "w_mbytes_per_sec": 
0 00:17:09.142 }, 00:17:09.142 "claimed": true, 00:17:09.142 "claim_type": "exclusive_write", 00:17:09.142 "zoned": false, 00:17:09.142 "supported_io_types": { 00:17:09.142 "read": true, 00:17:09.142 "write": true, 00:17:09.142 "unmap": true, 00:17:09.142 "write_zeroes": true, 00:17:09.142 "flush": true, 00:17:09.142 "reset": true, 00:17:09.142 "compare": false, 00:17:09.142 "compare_and_write": false, 00:17:09.142 "abort": true, 00:17:09.142 "nvme_admin": false, 00:17:09.142 "nvme_io": false 00:17:09.142 }, 00:17:09.142 "memory_domains": [ 00:17:09.142 { 00:17:09.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.142 "dma_device_type": 2 00:17:09.142 } 00:17:09.142 ], 00:17:09.142 "driver_specific": {} 00:17:09.142 } 00:17:09.142 ] 00:17:09.142 13:40:48 -- common/autotest_common.sh@895 -- # return 0 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.142 13:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.402 13:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.402 "name": "Existed_Raid", 00:17:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.402 "strip_size_kb": 64, 00:17:09.402 "state": "configuring", 00:17:09.402 "raid_level": "concat", 00:17:09.402 "superblock": false, 00:17:09.402 "num_base_bdevs": 3, 00:17:09.402 "num_base_bdevs_discovered": 1, 00:17:09.402 "num_base_bdevs_operational": 3, 00:17:09.402 "base_bdevs_list": [ 00:17:09.402 { 00:17:09.402 "name": "BaseBdev1", 00:17:09.402 "uuid": "106bad14-9369-499f-9703-53e540889e32", 00:17:09.402 "is_configured": true, 00:17:09.402 "data_offset": 0, 00:17:09.402 "data_size": 65536 00:17:09.402 }, 00:17:09.402 { 00:17:09.402 "name": "BaseBdev2", 00:17:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.402 "is_configured": false, 00:17:09.402 "data_offset": 0, 00:17:09.402 "data_size": 0 00:17:09.402 }, 00:17:09.402 { 00:17:09.402 "name": "BaseBdev3", 00:17:09.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.402 "is_configured": false, 00:17:09.402 "data_offset": 0, 00:17:09.402 "data_size": 0 00:17:09.402 } 00:17:09.402 ] 00:17:09.402 }' 00:17:09.402 13:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.402 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:17:10.004 13:40:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.264 [2024-07-10 13:40:49.402414] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.264 [2024-07-10 13:40:49.402557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:10.264 [2024-07-10 13:40:49.598132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.264 [2024-07-10 13:40:49.600029] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.264 [2024-07-10 13:40:49.600151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.264 [2024-07-10 13:40:49.600185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:10.264 [2024-07-10 13:40:49.600222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.264 13:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.523 13:40:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.523 "name": "Existed_Raid", 00:17:10.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.523 "strip_size_kb": 64, 00:17:10.523 "state": "configuring", 00:17:10.523 "raid_level": "concat", 00:17:10.523 "superblock": false, 00:17:10.523 "num_base_bdevs": 3, 00:17:10.523 "num_base_bdevs_discovered": 1, 00:17:10.523 "num_base_bdevs_operational": 3, 00:17:10.523 "base_bdevs_list": [ 00:17:10.523 { 00:17:10.523 "name": "BaseBdev1", 00:17:10.523 "uuid": "106bad14-9369-499f-9703-53e540889e32", 00:17:10.523 "is_configured": true, 00:17:10.523 "data_offset": 0, 00:17:10.523 "data_size": 65536 00:17:10.523 }, 00:17:10.523 { 00:17:10.523 "name": "BaseBdev2", 00:17:10.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.523 "is_configured": false, 00:17:10.523 "data_offset": 0, 00:17:10.523 "data_size": 0 00:17:10.523 }, 00:17:10.523 { 00:17:10.523 "name": "BaseBdev3", 00:17:10.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.523 "is_configured": false, 00:17:10.523 "data_offset": 0, 00:17:10.523 "data_size": 0 00:17:10.523 } 00:17:10.523 ] 00:17:10.523 }' 00:17:10.523 13:40:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.523 13:40:49 -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 13:40:50 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:11.461 [2024-07-10 13:40:50.738883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.461 BaseBdev2 00:17:11.461 13:40:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:11.461 13:40:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:11.461 13:40:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.461 13:40:50 -- common/autotest_common.sh@889 -- # local i 00:17:11.461 13:40:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.461 13:40:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.461 13:40:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.720 13:40:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:11.979 [ 00:17:11.979 { 00:17:11.979 "name": "BaseBdev2", 00:17:11.979 "aliases": [ 00:17:11.979 "70cea566-0f34-4db1-832e-a1b744b8e730" 00:17:11.979 ], 00:17:11.979 "product_name": "Malloc disk", 00:17:11.979 "block_size": 512, 00:17:11.979 "num_blocks": 65536, 00:17:11.979 "uuid": "70cea566-0f34-4db1-832e-a1b744b8e730", 00:17:11.979 "assigned_rate_limits": { 00:17:11.979 "rw_ios_per_sec": 0, 00:17:11.979 "rw_mbytes_per_sec": 0, 00:17:11.979 "r_mbytes_per_sec": 0, 00:17:11.979 "w_mbytes_per_sec": 0 00:17:11.979 }, 00:17:11.979 "claimed": true, 00:17:11.979 "claim_type": "exclusive_write", 00:17:11.979 "zoned": false, 00:17:11.979 "supported_io_types": { 00:17:11.979 "read": true, 00:17:11.979 "write": true, 00:17:11.979 "unmap": true, 00:17:11.979 "write_zeroes": true, 00:17:11.979 "flush": true, 00:17:11.979 "reset": true, 00:17:11.979 "compare": false, 00:17:11.979 "compare_and_write": false, 00:17:11.979 "abort": true, 00:17:11.979 "nvme_admin": false, 00:17:11.979 "nvme_io": false 00:17:11.979 }, 00:17:11.979 "memory_domains": [ 00:17:11.979 { 00:17:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.979 "dma_device_type": 2 00:17:11.979 } 00:17:11.979 ], 00:17:11.979 "driver_specific": {} 00:17:11.979 } 00:17:11.979 ] 00:17:11.979 13:40:51 -- common/autotest_common.sh@895 -- # return 0 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.979 13:40:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
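
Each base device in these state-function tests is a plain malloc bdev created over RPC, 32 MiB with a 512-byte block size, which is why the bdev_get_bdevs dump above reports "num_blocks": 65536 (32 * 1024 * 1024 / 512 = 65536). The two calls traced above, reproduced as a standalone sketch with the socket path from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 32 MiB malloc bdev with 512 B blocks -> 65536 blocks, matching the dump above.
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # Poll for up to 2000 ms until the new bdev shows up.
    $rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
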
00:17:12.238 13:40:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.238 "name": "Existed_Raid", 00:17:12.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.238 "strip_size_kb": 64, 00:17:12.238 "state": "configuring", 00:17:12.238 "raid_level": "concat", 00:17:12.238 "superblock": false, 00:17:12.238 "num_base_bdevs": 3, 00:17:12.238 "num_base_bdevs_discovered": 2, 00:17:12.238 "num_base_bdevs_operational": 3, 00:17:12.238 "base_bdevs_list": [ 00:17:12.238 { 00:17:12.238 "name": "BaseBdev1", 00:17:12.238 "uuid": "106bad14-9369-499f-9703-53e540889e32", 00:17:12.238 "is_configured": true, 00:17:12.238 "data_offset": 0, 00:17:12.238 "data_size": 65536 00:17:12.238 }, 00:17:12.238 { 00:17:12.238 "name": "BaseBdev2", 00:17:12.238 "uuid": "70cea566-0f34-4db1-832e-a1b744b8e730", 00:17:12.238 "is_configured": true, 00:17:12.239 "data_offset": 0, 00:17:12.239 "data_size": 65536 00:17:12.239 }, 00:17:12.239 { 00:17:12.239 "name": "BaseBdev3", 00:17:12.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.239 "is_configured": false, 00:17:12.239 "data_offset": 0, 00:17:12.239 "data_size": 0 00:17:12.239 } 00:17:12.239 ] 00:17:12.239 }' 00:17:12.239 13:40:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.239 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:17:12.808 13:40:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:13.067 [2024-07-10 13:40:52.174990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.067 [2024-07-10 13:40:52.175105] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:13.067 [2024-07-10 13:40:52.175125] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:13.067 [2024-07-10 13:40:52.175269] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:13.067 [2024-07-10 13:40:52.175586] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:13.067 [2024-07-10 13:40:52.175630] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:13.067 [2024-07-10 13:40:52.175876] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.067 BaseBdev3 00:17:13.067 13:40:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:13.067 13:40:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:13.067 13:40:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:13.067 13:40:52 -- common/autotest_common.sh@889 -- # local i 00:17:13.067 13:40:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:13.067 13:40:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:13.067 13:40:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.067 13:40:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:13.328 [ 00:17:13.328 { 00:17:13.328 "name": "BaseBdev3", 00:17:13.328 "aliases": [ 00:17:13.328 "4129ba86-d0fa-41b4-bc36-43624cc78a7e" 00:17:13.328 ], 00:17:13.328 "product_name": "Malloc disk", 00:17:13.328 "block_size": 512, 00:17:13.328 "num_blocks": 65536, 00:17:13.328 "uuid": "4129ba86-d0fa-41b4-bc36-43624cc78a7e", 00:17:13.328 "assigned_rate_limits": { 00:17:13.328 
"rw_ios_per_sec": 0, 00:17:13.328 "rw_mbytes_per_sec": 0, 00:17:13.328 "r_mbytes_per_sec": 0, 00:17:13.328 "w_mbytes_per_sec": 0 00:17:13.328 }, 00:17:13.328 "claimed": true, 00:17:13.328 "claim_type": "exclusive_write", 00:17:13.328 "zoned": false, 00:17:13.328 "supported_io_types": { 00:17:13.328 "read": true, 00:17:13.328 "write": true, 00:17:13.328 "unmap": true, 00:17:13.328 "write_zeroes": true, 00:17:13.328 "flush": true, 00:17:13.328 "reset": true, 00:17:13.328 "compare": false, 00:17:13.328 "compare_and_write": false, 00:17:13.328 "abort": true, 00:17:13.328 "nvme_admin": false, 00:17:13.328 "nvme_io": false 00:17:13.328 }, 00:17:13.328 "memory_domains": [ 00:17:13.328 { 00:17:13.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.328 "dma_device_type": 2 00:17:13.328 } 00:17:13.328 ], 00:17:13.328 "driver_specific": {} 00:17:13.328 } 00:17:13.328 ] 00:17:13.328 13:40:52 -- common/autotest_common.sh@895 -- # return 0 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.328 13:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.587 13:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.587 "name": "Existed_Raid", 00:17:13.587 "uuid": "47340bb4-de58-435e-9b22-def9597f2635", 00:17:13.587 "strip_size_kb": 64, 00:17:13.587 "state": "online", 00:17:13.587 "raid_level": "concat", 00:17:13.587 "superblock": false, 00:17:13.587 "num_base_bdevs": 3, 00:17:13.587 "num_base_bdevs_discovered": 3, 00:17:13.587 "num_base_bdevs_operational": 3, 00:17:13.587 "base_bdevs_list": [ 00:17:13.587 { 00:17:13.587 "name": "BaseBdev1", 00:17:13.587 "uuid": "106bad14-9369-499f-9703-53e540889e32", 00:17:13.587 "is_configured": true, 00:17:13.587 "data_offset": 0, 00:17:13.587 "data_size": 65536 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "BaseBdev2", 00:17:13.587 "uuid": "70cea566-0f34-4db1-832e-a1b744b8e730", 00:17:13.587 "is_configured": true, 00:17:13.587 "data_offset": 0, 00:17:13.587 "data_size": 65536 00:17:13.587 }, 00:17:13.587 { 00:17:13.587 "name": "BaseBdev3", 00:17:13.587 "uuid": "4129ba86-d0fa-41b4-bc36-43624cc78a7e", 00:17:13.587 "is_configured": true, 00:17:13.587 "data_offset": 0, 00:17:13.587 "data_size": 65536 00:17:13.587 } 00:17:13.587 ] 00:17:13.587 }' 00:17:13.587 13:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.587 13:40:52 -- common/autotest_common.sh@10 -- # set +x 00:17:14.154 13:40:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:14.413 [2024-07-10 13:40:53.610792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.413 [2024-07-10 13:40:53.610906] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.413 [2024-07-10 13:40:53.610997] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.413 13:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.673 13:40:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.673 "name": "Existed_Raid", 00:17:14.673 "uuid": "47340bb4-de58-435e-9b22-def9597f2635", 00:17:14.673 "strip_size_kb": 64, 00:17:14.673 "state": "offline", 00:17:14.673 "raid_level": "concat", 00:17:14.673 "superblock": false, 00:17:14.673 "num_base_bdevs": 3, 00:17:14.673 "num_base_bdevs_discovered": 2, 00:17:14.673 "num_base_bdevs_operational": 2, 00:17:14.673 "base_bdevs_list": [ 00:17:14.673 { 00:17:14.673 "name": null, 00:17:14.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.673 "is_configured": false, 00:17:14.673 "data_offset": 0, 00:17:14.673 "data_size": 65536 00:17:14.673 }, 00:17:14.673 { 00:17:14.673 "name": "BaseBdev2", 00:17:14.673 "uuid": "70cea566-0f34-4db1-832e-a1b744b8e730", 00:17:14.673 "is_configured": true, 00:17:14.673 "data_offset": 0, 00:17:14.673 "data_size": 65536 00:17:14.673 }, 00:17:14.673 { 00:17:14.673 "name": "BaseBdev3", 00:17:14.673 "uuid": "4129ba86-d0fa-41b4-bc36-43624cc78a7e", 00:17:14.673 "is_configured": true, 00:17:14.673 "data_offset": 0, 00:17:14.673 "data_size": 65536 00:17:14.673 } 00:17:14.673 ] 00:17:14.673 }' 00:17:14.673 13:40:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.673 13:40:53 -- common/autotest_common.sh@10 -- # set +x 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:15.611 13:40:54 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.611 13:40:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:15.870 [2024-07-10 13:40:54.979741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:15.870 13:40:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:15.870 13:40:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.870 13:40:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.870 13:40:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:16.143 13:40:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.143 13:40:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.143 13:40:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:16.143 [2024-07-10 13:40:55.456781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:16.143 [2024-07-10 13:40:55.456972] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:16.426 13:40:55 -- bdev/bdev_raid.sh@287 -- # killprocess 119131 00:17:16.426 13:40:55 -- common/autotest_common.sh@926 -- # '[' -z 119131 ']' 00:17:16.426 13:40:55 -- common/autotest_common.sh@930 -- # kill -0 119131 00:17:16.427 13:40:55 -- common/autotest_common.sh@931 -- # uname 00:17:16.687 13:40:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.687 13:40:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119131 00:17:16.687 killing process with pid 119131 00:17:16.687 13:40:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:16.687 13:40:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:16.687 13:40:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119131' 00:17:16.687 13:40:55 -- common/autotest_common.sh@945 -- # kill 119131 00:17:16.687 13:40:55 -- common/autotest_common.sh@950 -- # wait 119131 00:17:16.687 [2024-07-10 13:40:55.805739] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.687 [2024-07-10 13:40:55.805994] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.066 ************************************ 00:17:18.066 END TEST raid_state_function_test 00:17:18.066 ************************************ 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:18.066 00:17:18.066 real 0m11.855s 00:17:18.066 user 0m20.405s 00:17:18.066 sys 0m1.403s 00:17:18.066 13:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.066 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:18.066 13:40:57 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
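
The teardown traced above follows the autotest killprocess pattern: verify the pid is still alive, log what is being killed, signal it, then wait to reap it so the exit status propagates to the test. A rough reconstruction from the xtrace; the real helper in autotest_common.sh carries extra guards (argument checks, the reactor_0/sudo comm check) that are elided here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"          # error out if the pid no longer exists
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"             # reap it so the exit status propagates
    }
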
00:17:18.066 13:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:18.066 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.066 ************************************ 00:17:18.066 START TEST raid_state_function_test_sb 00:17:18.066 ************************************ 00:17:18.066 13:40:57 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=119521 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119521' 00:17:18.066 Process raid pid: 119521 00:17:18.066 13:40:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119521 /var/tmp/spdk-raid.sock 00:17:18.066 13:40:57 -- common/autotest_common.sh@819 -- # '[' -z 119521 ']' 00:17:18.066 13:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.066 13:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.066 13:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
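
The _sb variant starting here differs from the preceding raid_state_function_test run in a single knob: superblock=true, which the harness turns into the -s flag on bdev_raid_create so each base bdev is stamped with an on-disk RAID superblock. That reservation is visible in the dumps that follow, where data_offset moves from 0 to 2048 blocks and data_size shrinks from 65536 to 63488 (2048 blocks * 512 B = 1 MiB reserved per base bdev). The create call as it appears shortly below in this trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
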
00:17:18.066 13:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.066 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.066 [2024-07-10 13:40:57.403214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:18.066 [2024-07-10 13:40:57.403442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.326 [2024-07-10 13:40:57.566624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.585 [2024-07-10 13:40:57.781763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.844 [2024-07-10 13:40:58.000514] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.103 13:40:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.103 13:40:58 -- common/autotest_common.sh@852 -- # return 0 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:19.104 [2024-07-10 13:40:58.423281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.104 [2024-07-10 13:40:58.423421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.104 [2024-07-10 13:40:58.423468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.104 [2024-07-10 13:40:58.423496] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.104 [2024-07-10 13:40:58.423512] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.104 [2024-07-10 13:40:58.423559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.104 13:40:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.363 13:40:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.363 "name": "Existed_Raid", 00:17:19.363 "uuid": "4eb5a46e-86f0-4e7c-9e84-b06966aefe13", 00:17:19.363 "strip_size_kb": 64, 00:17:19.363 "state": "configuring", 00:17:19.363 "raid_level": "concat", 00:17:19.363 "superblock": true, 00:17:19.363 "num_base_bdevs": 3, 00:17:19.363 "num_base_bdevs_discovered": 0, 00:17:19.363 "num_base_bdevs_operational": 3, 00:17:19.363 "base_bdevs_list": [ 00:17:19.363 { 00:17:19.363 "name": 
"BaseBdev1", 00:17:19.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.363 "is_configured": false, 00:17:19.363 "data_offset": 0, 00:17:19.363 "data_size": 0 00:17:19.363 }, 00:17:19.363 { 00:17:19.363 "name": "BaseBdev2", 00:17:19.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.363 "is_configured": false, 00:17:19.363 "data_offset": 0, 00:17:19.363 "data_size": 0 00:17:19.363 }, 00:17:19.363 { 00:17:19.363 "name": "BaseBdev3", 00:17:19.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.363 "is_configured": false, 00:17:19.363 "data_offset": 0, 00:17:19.363 "data_size": 0 00:17:19.363 } 00:17:19.363 ] 00:17:19.363 }' 00:17:19.363 13:40:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.363 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:17:20.302 13:40:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.302 [2024-07-10 13:40:59.481260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.302 [2024-07-10 13:40:59.481370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:20.302 13:40:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:20.562 [2024-07-10 13:40:59.660996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.562 [2024-07-10 13:40:59.661108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.562 [2024-07-10 13:40:59.661136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.562 [2024-07-10 13:40:59.661177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.562 [2024-07-10 13:40:59.661199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.562 [2024-07-10 13:40:59.661237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.562 13:40:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.562 [2024-07-10 13:40:59.875697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.562 BaseBdev1 00:17:20.562 13:40:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:20.562 13:40:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:20.562 13:40:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:20.562 13:40:59 -- common/autotest_common.sh@889 -- # local i 00:17:20.562 13:40:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:20.562 13:40:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:20.562 13:40:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.822 13:41:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.082 [ 00:17:21.082 { 00:17:21.082 "name": "BaseBdev1", 00:17:21.082 "aliases": [ 00:17:21.082 "b69bbff1-cb76-474c-9778-158117c8ba09" 00:17:21.082 ], 00:17:21.082 "product_name": "Malloc disk", 00:17:21.082 "block_size": 512, 00:17:21.082 
"num_blocks": 65536, 00:17:21.082 "uuid": "b69bbff1-cb76-474c-9778-158117c8ba09", 00:17:21.082 "assigned_rate_limits": { 00:17:21.082 "rw_ios_per_sec": 0, 00:17:21.082 "rw_mbytes_per_sec": 0, 00:17:21.082 "r_mbytes_per_sec": 0, 00:17:21.082 "w_mbytes_per_sec": 0 00:17:21.082 }, 00:17:21.082 "claimed": true, 00:17:21.082 "claim_type": "exclusive_write", 00:17:21.082 "zoned": false, 00:17:21.082 "supported_io_types": { 00:17:21.082 "read": true, 00:17:21.082 "write": true, 00:17:21.082 "unmap": true, 00:17:21.082 "write_zeroes": true, 00:17:21.082 "flush": true, 00:17:21.082 "reset": true, 00:17:21.082 "compare": false, 00:17:21.082 "compare_and_write": false, 00:17:21.082 "abort": true, 00:17:21.082 "nvme_admin": false, 00:17:21.082 "nvme_io": false 00:17:21.082 }, 00:17:21.082 "memory_domains": [ 00:17:21.082 { 00:17:21.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.082 "dma_device_type": 2 00:17:21.082 } 00:17:21.082 ], 00:17:21.082 "driver_specific": {} 00:17:21.082 } 00:17:21.082 ] 00:17:21.082 13:41:00 -- common/autotest_common.sh@895 -- # return 0 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.082 "name": "Existed_Raid", 00:17:21.082 "uuid": "958907f4-8bfc-4dcb-8e10-78664392a0af", 00:17:21.082 "strip_size_kb": 64, 00:17:21.082 "state": "configuring", 00:17:21.082 "raid_level": "concat", 00:17:21.082 "superblock": true, 00:17:21.082 "num_base_bdevs": 3, 00:17:21.082 "num_base_bdevs_discovered": 1, 00:17:21.082 "num_base_bdevs_operational": 3, 00:17:21.082 "base_bdevs_list": [ 00:17:21.082 { 00:17:21.082 "name": "BaseBdev1", 00:17:21.082 "uuid": "b69bbff1-cb76-474c-9778-158117c8ba09", 00:17:21.082 "is_configured": true, 00:17:21.082 "data_offset": 2048, 00:17:21.082 "data_size": 63488 00:17:21.082 }, 00:17:21.082 { 00:17:21.082 "name": "BaseBdev2", 00:17:21.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.082 "is_configured": false, 00:17:21.082 "data_offset": 0, 00:17:21.082 "data_size": 0 00:17:21.082 }, 00:17:21.082 { 00:17:21.082 "name": "BaseBdev3", 00:17:21.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.082 "is_configured": false, 00:17:21.082 "data_offset": 0, 00:17:21.082 "data_size": 0 00:17:21.082 } 00:17:21.082 ] 00:17:21.082 }' 00:17:21.082 13:41:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.082 13:41:00 -- common/autotest_common.sh@10 -- # set +x 00:17:22.028 13:41:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.028 [2024-07-10 13:41:01.197355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.028 [2024-07-10 13:41:01.197485] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:22.028 13:41:01 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:22.028 13:41:01 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:22.287 13:41:01 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:22.545 BaseBdev1 00:17:22.545 13:41:01 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:22.545 13:41:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:22.545 13:41:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:22.545 13:41:01 -- common/autotest_common.sh@889 -- # local i 00:17:22.545 13:41:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:22.545 13:41:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:22.545 13:41:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.545 13:41:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.803 [ 00:17:22.803 { 00:17:22.803 "name": "BaseBdev1", 00:17:22.803 "aliases": [ 00:17:22.803 "743b6ec0-1d82-4298-8a37-a5a2e7771df6" 00:17:22.803 ], 00:17:22.803 "product_name": "Malloc disk", 00:17:22.803 "block_size": 512, 00:17:22.803 "num_blocks": 65536, 00:17:22.803 "uuid": "743b6ec0-1d82-4298-8a37-a5a2e7771df6", 00:17:22.803 "assigned_rate_limits": { 00:17:22.803 "rw_ios_per_sec": 0, 00:17:22.803 "rw_mbytes_per_sec": 0, 00:17:22.803 "r_mbytes_per_sec": 0, 00:17:22.803 "w_mbytes_per_sec": 0 00:17:22.803 }, 00:17:22.803 "claimed": false, 00:17:22.803 "zoned": false, 00:17:22.803 "supported_io_types": { 00:17:22.803 "read": true, 00:17:22.803 "write": true, 00:17:22.803 "unmap": true, 00:17:22.803 "write_zeroes": true, 00:17:22.803 "flush": true, 00:17:22.803 "reset": true, 00:17:22.803 "compare": false, 00:17:22.803 "compare_and_write": false, 00:17:22.803 "abort": true, 00:17:22.803 "nvme_admin": false, 00:17:22.803 "nvme_io": false 00:17:22.803 }, 00:17:22.803 "memory_domains": [ 00:17:22.803 { 00:17:22.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.803 "dma_device_type": 2 00:17:22.803 } 00:17:22.803 ], 00:17:22.803 "driver_specific": {} 00:17:22.803 } 00:17:22.803 ] 00:17:22.803 13:41:02 -- common/autotest_common.sh@895 -- # return 0 00:17:22.803 13:41:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:23.060 [2024-07-10 13:41:02.237759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.060 [2024-07-10 13:41:02.239502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.060 [2024-07-10 13:41:02.239582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.060 [2024-07-10 13:41:02.239627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.060 [2024-07-10 
13:41:02.239663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.060 13:41:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.317 13:41:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.317 "name": "Existed_Raid", 00:17:23.317 "uuid": "b588017b-814f-4fdb-8c0d-a8d49a5818f3", 00:17:23.317 "strip_size_kb": 64, 00:17:23.317 "state": "configuring", 00:17:23.317 "raid_level": "concat", 00:17:23.317 "superblock": true, 00:17:23.317 "num_base_bdevs": 3, 00:17:23.317 "num_base_bdevs_discovered": 1, 00:17:23.317 "num_base_bdevs_operational": 3, 00:17:23.317 "base_bdevs_list": [ 00:17:23.317 { 00:17:23.317 "name": "BaseBdev1", 00:17:23.317 "uuid": "743b6ec0-1d82-4298-8a37-a5a2e7771df6", 00:17:23.317 "is_configured": true, 00:17:23.317 "data_offset": 2048, 00:17:23.317 "data_size": 63488 00:17:23.317 }, 00:17:23.317 { 00:17:23.317 "name": "BaseBdev2", 00:17:23.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.317 "is_configured": false, 00:17:23.317 "data_offset": 0, 00:17:23.317 "data_size": 0 00:17:23.317 }, 00:17:23.317 { 00:17:23.317 "name": "BaseBdev3", 00:17:23.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.318 "is_configured": false, 00:17:23.318 "data_offset": 0, 00:17:23.318 "data_size": 0 00:17:23.318 } 00:17:23.318 ] 00:17:23.318 }' 00:17:23.318 13:41:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.318 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.882 13:41:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:24.139 [2024-07-10 13:41:03.289252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.139 BaseBdev2 00:17:24.139 13:41:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:24.139 13:41:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:24.139 13:41:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.139 13:41:03 -- common/autotest_common.sh@889 -- # local i 00:17:24.139 13:41:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.139 13:41:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.139 13:41:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.139 13:41:03 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.400 [ 00:17:24.400 { 00:17:24.400 "name": "BaseBdev2", 00:17:24.400 "aliases": [ 00:17:24.400 "4abc4684-ac85-4ec7-bf3b-2fbb0d995ae4" 00:17:24.400 ], 00:17:24.400 "product_name": "Malloc disk", 00:17:24.400 "block_size": 512, 00:17:24.400 "num_blocks": 65536, 00:17:24.400 "uuid": "4abc4684-ac85-4ec7-bf3b-2fbb0d995ae4", 00:17:24.400 "assigned_rate_limits": { 00:17:24.400 "rw_ios_per_sec": 0, 00:17:24.400 "rw_mbytes_per_sec": 0, 00:17:24.400 "r_mbytes_per_sec": 0, 00:17:24.400 "w_mbytes_per_sec": 0 00:17:24.400 }, 00:17:24.400 "claimed": true, 00:17:24.400 "claim_type": "exclusive_write", 00:17:24.400 "zoned": false, 00:17:24.400 "supported_io_types": { 00:17:24.400 "read": true, 00:17:24.400 "write": true, 00:17:24.400 "unmap": true, 00:17:24.400 "write_zeroes": true, 00:17:24.400 "flush": true, 00:17:24.400 "reset": true, 00:17:24.400 "compare": false, 00:17:24.400 "compare_and_write": false, 00:17:24.400 "abort": true, 00:17:24.400 "nvme_admin": false, 00:17:24.400 "nvme_io": false 00:17:24.400 }, 00:17:24.400 "memory_domains": [ 00:17:24.400 { 00:17:24.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.400 "dma_device_type": 2 00:17:24.400 } 00:17:24.400 ], 00:17:24.400 "driver_specific": {} 00:17:24.400 } 00:17:24.400 ] 00:17:24.400 13:41:03 -- common/autotest_common.sh@895 -- # return 0 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.400 13:41:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.658 13:41:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.658 "name": "Existed_Raid", 00:17:24.658 "uuid": "b588017b-814f-4fdb-8c0d-a8d49a5818f3", 00:17:24.658 "strip_size_kb": 64, 00:17:24.658 "state": "configuring", 00:17:24.658 "raid_level": "concat", 00:17:24.658 "superblock": true, 00:17:24.658 "num_base_bdevs": 3, 00:17:24.658 "num_base_bdevs_discovered": 2, 00:17:24.658 "num_base_bdevs_operational": 3, 00:17:24.658 "base_bdevs_list": [ 00:17:24.658 { 00:17:24.658 "name": "BaseBdev1", 00:17:24.658 "uuid": "743b6ec0-1d82-4298-8a37-a5a2e7771df6", 00:17:24.658 "is_configured": true, 00:17:24.658 "data_offset": 2048, 00:17:24.658 "data_size": 63488 00:17:24.658 }, 00:17:24.658 { 00:17:24.658 "name": "BaseBdev2", 00:17:24.658 "uuid": "4abc4684-ac85-4ec7-bf3b-2fbb0d995ae4", 00:17:24.658 "is_configured": true, 00:17:24.658 "data_offset": 2048, 00:17:24.658 
"data_size": 63488 00:17:24.658 }, 00:17:24.658 { 00:17:24.658 "name": "BaseBdev3", 00:17:24.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.658 "is_configured": false, 00:17:24.658 "data_offset": 0, 00:17:24.658 "data_size": 0 00:17:24.658 } 00:17:24.658 ] 00:17:24.658 }' 00:17:24.658 13:41:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.658 13:41:03 -- common/autotest_common.sh@10 -- # set +x 00:17:25.224 13:41:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:25.483 [2024-07-10 13:41:04.679166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.483 [2024-07-10 13:41:04.679454] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:25.483 [2024-07-10 13:41:04.679496] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.483 [2024-07-10 13:41:04.679639] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:25.483 BaseBdev3 00:17:25.483 [2024-07-10 13:41:04.679943] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:25.483 [2024-07-10 13:41:04.679985] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:25.483 [2024-07-10 13:41:04.680163] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.483 13:41:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:25.483 13:41:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:25.483 13:41:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.483 13:41:04 -- common/autotest_common.sh@889 -- # local i 00:17:25.483 13:41:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.483 13:41:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.483 13:41:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.743 13:41:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.743 [ 00:17:25.743 { 00:17:25.743 "name": "BaseBdev3", 00:17:25.743 "aliases": [ 00:17:25.743 "8c54d305-fdab-4f03-b2ac-76279033ac5a" 00:17:25.743 ], 00:17:25.743 "product_name": "Malloc disk", 00:17:25.743 "block_size": 512, 00:17:25.743 "num_blocks": 65536, 00:17:25.743 "uuid": "8c54d305-fdab-4f03-b2ac-76279033ac5a", 00:17:25.743 "assigned_rate_limits": { 00:17:25.743 "rw_ios_per_sec": 0, 00:17:25.743 "rw_mbytes_per_sec": 0, 00:17:25.743 "r_mbytes_per_sec": 0, 00:17:25.743 "w_mbytes_per_sec": 0 00:17:25.743 }, 00:17:25.743 "claimed": true, 00:17:25.743 "claim_type": "exclusive_write", 00:17:25.743 "zoned": false, 00:17:25.743 "supported_io_types": { 00:17:25.743 "read": true, 00:17:25.743 "write": true, 00:17:25.743 "unmap": true, 00:17:25.743 "write_zeroes": true, 00:17:25.743 "flush": true, 00:17:25.743 "reset": true, 00:17:25.743 "compare": false, 00:17:25.743 "compare_and_write": false, 00:17:25.743 "abort": true, 00:17:25.743 "nvme_admin": false, 00:17:25.743 "nvme_io": false 00:17:25.743 }, 00:17:25.743 "memory_domains": [ 00:17:25.743 { 00:17:25.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.743 "dma_device_type": 2 00:17:25.743 } 00:17:25.743 ], 00:17:25.743 "driver_specific": {} 00:17:25.743 } 00:17:25.743 ] 00:17:25.743 
13:41:05 -- common/autotest_common.sh@895 -- # return 0 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.743 13:41:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.003 13:41:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.003 "name": "Existed_Raid", 00:17:26.003 "uuid": "b588017b-814f-4fdb-8c0d-a8d49a5818f3", 00:17:26.003 "strip_size_kb": 64, 00:17:26.003 "state": "online", 00:17:26.003 "raid_level": "concat", 00:17:26.003 "superblock": true, 00:17:26.003 "num_base_bdevs": 3, 00:17:26.003 "num_base_bdevs_discovered": 3, 00:17:26.003 "num_base_bdevs_operational": 3, 00:17:26.003 "base_bdevs_list": [ 00:17:26.003 { 00:17:26.003 "name": "BaseBdev1", 00:17:26.003 "uuid": "743b6ec0-1d82-4298-8a37-a5a2e7771df6", 00:17:26.003 "is_configured": true, 00:17:26.003 "data_offset": 2048, 00:17:26.003 "data_size": 63488 00:17:26.003 }, 00:17:26.003 { 00:17:26.003 "name": "BaseBdev2", 00:17:26.003 "uuid": "4abc4684-ac85-4ec7-bf3b-2fbb0d995ae4", 00:17:26.003 "is_configured": true, 00:17:26.003 "data_offset": 2048, 00:17:26.003 "data_size": 63488 00:17:26.003 }, 00:17:26.003 { 00:17:26.003 "name": "BaseBdev3", 00:17:26.003 "uuid": "8c54d305-fdab-4f03-b2ac-76279033ac5a", 00:17:26.003 "is_configured": true, 00:17:26.003 "data_offset": 2048, 00:17:26.003 "data_size": 63488 00:17:26.003 } 00:17:26.003 ] 00:17:26.003 }' 00:17:26.003 13:41:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.003 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.571 13:41:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.831 [2024-07-10 13:41:06.032886] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.831 [2024-07-10 13:41:06.032965] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.831 [2024-07-10 13:41:06.033056] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:26.831 13:41:06 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.831 13:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.090 13:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.090 "name": "Existed_Raid", 00:17:27.090 "uuid": "b588017b-814f-4fdb-8c0d-a8d49a5818f3", 00:17:27.090 "strip_size_kb": 64, 00:17:27.090 "state": "offline", 00:17:27.090 "raid_level": "concat", 00:17:27.090 "superblock": true, 00:17:27.090 "num_base_bdevs": 3, 00:17:27.090 "num_base_bdevs_discovered": 2, 00:17:27.090 "num_base_bdevs_operational": 2, 00:17:27.090 "base_bdevs_list": [ 00:17:27.090 { 00:17:27.090 "name": null, 00:17:27.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.090 "is_configured": false, 00:17:27.090 "data_offset": 2048, 00:17:27.090 "data_size": 63488 00:17:27.090 }, 00:17:27.090 { 00:17:27.090 "name": "BaseBdev2", 00:17:27.090 "uuid": "4abc4684-ac85-4ec7-bf3b-2fbb0d995ae4", 00:17:27.090 "is_configured": true, 00:17:27.090 "data_offset": 2048, 00:17:27.090 "data_size": 63488 00:17:27.090 }, 00:17:27.090 { 00:17:27.090 "name": "BaseBdev3", 00:17:27.090 "uuid": "8c54d305-fdab-4f03-b2ac-76279033ac5a", 00:17:27.090 "is_configured": true, 00:17:27.090 "data_offset": 2048, 00:17:27.090 "data_size": 63488 00:17:27.090 } 00:17:27.090 ] 00:17:27.090 }' 00:17:27.090 13:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.090 13:41:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.659 13:41:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:27.659 13:41:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.659 13:41:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.659 13:41:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.917 13:41:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.917 13:41:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.917 13:41:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:28.177 [2024-07-10 13:41:07.336559] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.177 13:41:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.177 13:41:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.177 13:41:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.177 13:41:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:28.435 13:41:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:28.435 13:41:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.435 13:41:07 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.695 [2024-07-10 13:41:07.790753] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.695 [2024-07-10 13:41:07.790896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:28.695 13:41:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.695 13:41:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.695 13:41:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.695 13:41:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.954 13:41:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:28.955 13:41:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:28.955 13:41:08 -- bdev/bdev_raid.sh@287 -- # killprocess 119521 00:17:28.955 13:41:08 -- common/autotest_common.sh@926 -- # '[' -z 119521 ']' 00:17:28.955 13:41:08 -- common/autotest_common.sh@930 -- # kill -0 119521 00:17:28.955 13:41:08 -- common/autotest_common.sh@931 -- # uname 00:17:28.955 13:41:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.955 13:41:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119521 00:17:28.955 killing process with pid 119521 00:17:28.955 13:41:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.955 13:41:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.955 13:41:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119521' 00:17:28.955 13:41:08 -- common/autotest_common.sh@945 -- # kill 119521 00:17:28.955 13:41:08 -- common/autotest_common.sh@950 -- # wait 119521 00:17:28.955 [2024-07-10 13:41:08.126633] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.955 [2024-07-10 13:41:08.126807] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.332 ************************************ 00:17:30.332 END TEST raid_state_function_test_sb 00:17:30.332 ************************************ 00:17:30.332 13:41:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:30.332 00:17:30.332 real 0m12.074s 00:17:30.333 user 0m21.019s 00:17:30.333 sys 0m1.387s 00:17:30.333 13:41:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.333 13:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:30.333 13:41:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:30.333 13:41:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:30.333 13:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.333 ************************************ 00:17:30.333 START TEST raid_superblock_test 00:17:30.333 ************************************ 00:17:30.333 13:41:09 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
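For orientation between the two tests: the raid_state_function_test_sb run that just ended drives the target entirely through rpc.py. Below is a minimal sketch of that concat lifecycle, using only the socket path, sizes, and bdev names visible in the trace above; the real harness in bdev_raid.sh wraps each step in state checks and retries, so this is an illustration, not a substitute.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Three 32 MiB malloc bdevs with 512 B blocks (65536 blocks, matching
    # the "num_blocks": 65536 reported by bdev_get_bdevs in the log).
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "$b"
        $RPC bdev_wait_for_examine
    done

    # Assemble a concat array with a 64 KiB strip; -s writes a superblock.
    $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # The state assertions ("configuring" -> "online") parse this output.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    $RPC bdev_raid_delete Existed_Raid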
00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=119915 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:30.333 13:41:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119915 /var/tmp/spdk-raid.sock 00:17:30.333 13:41:09 -- common/autotest_common.sh@819 -- # '[' -z 119915 ']' 00:17:30.333 13:41:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.333 13:41:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.333 13:41:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.333 13:41:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.333 13:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.333 [2024-07-10 13:41:09.542552] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:30.333 [2024-07-10 13:41:09.542746] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119915 ] 00:17:30.591 [2024-07-10 13:41:09.701496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.591 [2024-07-10 13:41:09.904431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.850 [2024-07-10 13:41:10.102004] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.109 13:41:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.109 13:41:10 -- common/autotest_common.sh@852 -- # return 0 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.109 13:41:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:31.370 malloc1 00:17:31.370 13:41:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:31.628 [2024-07-10 13:41:10.785987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.628 [2024-07-10 13:41:10.786131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.628 [2024-07-10 13:41:10.786182] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:31.628 [2024-07-10 13:41:10.786248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.628 [2024-07-10 13:41:10.788500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.629 [2024-07-10 13:41:10.788581] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.629 pt1 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.629 13:41:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:31.886 malloc2 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.887 [2024-07-10 13:41:11.204926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.887 [2024-07-10 13:41:11.205086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.887 [2024-07-10 13:41:11.205139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:31.887 [2024-07-10 13:41:11.205196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.887 [2024-07-10 13:41:11.207132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.887 [2024-07-10 13:41:11.207208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.887 pt2 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.887 13:41:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:32.145 malloc3 00:17:32.145 13:41:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:17:32.404 [2024-07-10 13:41:11.576599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:32.404 [2024-07-10 13:41:11.576747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.404 [2024-07-10 13:41:11.576816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:32.404 [2024-07-10 13:41:11.576869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.404 [2024-07-10 13:41:11.578810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.404 [2024-07-10 13:41:11.578893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:32.404 pt3 00:17:32.404 13:41:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:32.404 13:41:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:32.404 13:41:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:32.663 [2024-07-10 13:41:11.760343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.663 [2024-07-10 13:41:11.762288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.663 [2024-07-10 13:41:11.762397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:32.663 [2024-07-10 13:41:11.762599] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:32.663 [2024-07-10 13:41:11.762661] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:32.663 [2024-07-10 13:41:11.762853] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:32.663 [2024-07-10 13:41:11.763225] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:32.663 [2024-07-10 13:41:11.763269] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:32.663 [2024-07-10 13:41:11.763451] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.663 13:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.663 "name": "raid_bdev1", 00:17:32.663 "uuid": "17b3aacb-bd61-4e47-9333-e614e0651034", 00:17:32.663 "strip_size_kb": 64, 00:17:32.663 "state": "online", 00:17:32.663 "raid_level": "concat", 
00:17:32.663 "superblock": true, 00:17:32.663 "num_base_bdevs": 3, 00:17:32.664 "num_base_bdevs_discovered": 3, 00:17:32.664 "num_base_bdevs_operational": 3, 00:17:32.664 "base_bdevs_list": [ 00:17:32.664 { 00:17:32.664 "name": "pt1", 00:17:32.664 "uuid": "6cd64223-6aab-522a-8fb5-7bf2d15eec0f", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 2048, 00:17:32.664 "data_size": 63488 00:17:32.664 }, 00:17:32.664 { 00:17:32.664 "name": "pt2", 00:17:32.664 "uuid": "d9001971-2e05-5af6-9baa-34752c02895e", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 2048, 00:17:32.664 "data_size": 63488 00:17:32.664 }, 00:17:32.664 { 00:17:32.664 "name": "pt3", 00:17:32.664 "uuid": "0c5f9a00-f825-557c-8f18-ca05a4f89a21", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 2048, 00:17:32.664 "data_size": 63488 00:17:32.664 } 00:17:32.664 ] 00:17:32.664 }' 00:17:32.664 13:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.664 13:41:11 -- common/autotest_common.sh@10 -- # set +x 00:17:33.602 13:41:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.602 13:41:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:33.602 [2024-07-10 13:41:12.778787] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.602 13:41:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=17b3aacb-bd61-4e47-9333-e614e0651034 00:17:33.602 13:41:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 17b3aacb-bd61-4e47-9333-e614e0651034 ']' 00:17:33.602 13:41:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.860 [2024-07-10 13:41:12.962223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.860 [2024-07-10 13:41:12.962347] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.860 [2024-07-10 13:41:12.962485] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.860 [2024-07-10 13:41:12.962573] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.860 [2024-07-10 13:41:12.962607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:33.860 13:41:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.860 13:41:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:33.860 13:41:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:33.860 13:41:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:33.860 13:41:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.860 13:41:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:34.120 13:41:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.120 13:41:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.379 13:41:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.379 13:41:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.638 13:41:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:17:34.638 13:41:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.638 13:41:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:34.638 13:41:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.638 13:41:13 -- common/autotest_common.sh@640 -- # local es=0 00:17:34.639 13:41:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.639 13:41:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.639 13:41:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.639 13:41:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.639 13:41:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.639 13:41:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.639 13:41:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.639 13:41:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.639 13:41:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.639 13:41:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.899 [2024-07-10 13:41:14.068253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.899 [2024-07-10 13:41:14.070208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.899 [2024-07-10 13:41:14.070305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:34.899 [2024-07-10 13:41:14.070370] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:34.899 [2024-07-10 13:41:14.070481] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:34.899 [2024-07-10 13:41:14.070536] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:34.899 [2024-07-10 13:41:14.070596] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.899 [2024-07-10 13:41:14.070624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:34.899 request: 00:17:34.899 { 00:17:34.899 "name": "raid_bdev1", 00:17:34.899 "raid_level": "concat", 00:17:34.899 "base_bdevs": [ 00:17:34.899 "malloc1", 00:17:34.899 "malloc2", 00:17:34.899 "malloc3" 00:17:34.899 ], 00:17:34.899 "superblock": false, 00:17:34.899 "strip_size_kb": 64, 00:17:34.899 "method": "bdev_raid_create", 00:17:34.899 "req_id": 1 00:17:34.899 } 00:17:34.899 Got JSON-RPC error response 00:17:34.899 response: 00:17:34.899 { 00:17:34.899 "code": -17, 00:17:34.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.899 } 00:17:34.899 13:41:14 -- common/autotest_common.sh@643 -- # es=1 00:17:34.899 13:41:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 
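The -17 "File exists" response above is the superblock negative test doing its job: the earlier bdev_raid_create wrote superblocks through the pt passthru bdevs onto malloc1..malloc3, and those superblocks survive deletion of both the array and the passthru layer. A condensed sketch of that sequence, assuming a target already listening on the same socket:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # raid_bdev1 is built over passthru bdevs and stamped with a superblock (-s).
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    $RPC bdev_raid_delete raid_bdev1
    for p in pt1 pt2 pt3; do $RPC bdev_passthru_delete "$p"; done

    # The superblocks still live on the underlying mallocs, so re-creating
    # over them is rejected ("Existing raid superblock found on bdev malloc1").
    $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1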
00:17:34.899 13:41:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:34.899 13:41:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:34.899 13:41:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.899 13:41:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.158 [2024-07-10 13:41:14.467466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.158 [2024-07-10 13:41:14.467642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.158 [2024-07-10 13:41:14.467702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:35.158 [2024-07-10 13:41:14.467748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.158 [2024-07-10 13:41:14.470259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.158 [2024-07-10 13:41:14.470344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.158 [2024-07-10 13:41:14.470504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:35.158 [2024-07-10 13:41:14.470614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.158 pt1 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.158 13:41:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.159 13:41:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.159 13:41:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.159 13:41:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.159 13:41:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.418 13:41:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.418 "name": "raid_bdev1", 00:17:35.418 "uuid": "17b3aacb-bd61-4e47-9333-e614e0651034", 00:17:35.418 "strip_size_kb": 64, 00:17:35.418 "state": "configuring", 00:17:35.418 "raid_level": "concat", 00:17:35.418 "superblock": true, 00:17:35.418 "num_base_bdevs": 3, 00:17:35.418 "num_base_bdevs_discovered": 1, 00:17:35.418 "num_base_bdevs_operational": 3, 00:17:35.418 "base_bdevs_list": [ 00:17:35.418 { 00:17:35.418 "name": "pt1", 00:17:35.418 "uuid": "6cd64223-6aab-522a-8fb5-7bf2d15eec0f", 00:17:35.418 "is_configured": true, 00:17:35.418 "data_offset": 2048, 00:17:35.418 "data_size": 63488 00:17:35.418 }, 00:17:35.418 { 00:17:35.418 "name": null, 00:17:35.418 "uuid": "d9001971-2e05-5af6-9baa-34752c02895e", 00:17:35.418 "is_configured": 
false, 00:17:35.418 "data_offset": 2048, 00:17:35.418 "data_size": 63488 00:17:35.418 }, 00:17:35.418 { 00:17:35.418 "name": null, 00:17:35.418 "uuid": "0c5f9a00-f825-557c-8f18-ca05a4f89a21", 00:17:35.418 "is_configured": false, 00:17:35.418 "data_offset": 2048, 00:17:35.418 "data_size": 63488 00:17:35.418 } 00:17:35.418 ] 00:17:35.418 }' 00:17:35.418 13:41:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.418 13:41:14 -- common/autotest_common.sh@10 -- # set +x 00:17:35.986 13:41:15 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:35.986 13:41:15 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.246 [2024-07-10 13:41:15.521652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.246 [2024-07-10 13:41:15.521836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.246 [2024-07-10 13:41:15.521905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:36.246 [2024-07-10 13:41:15.521949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.246 [2024-07-10 13:41:15.522474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.246 [2024-07-10 13:41:15.522538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.246 [2024-07-10 13:41:15.522691] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.246 [2024-07-10 13:41:15.522744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.246 pt2 00:17:36.246 13:41:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.505 [2024-07-10 13:41:15.729264] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.505 13:41:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.764 13:41:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.764 "name": "raid_bdev1", 00:17:36.764 "uuid": "17b3aacb-bd61-4e47-9333-e614e0651034", 00:17:36.764 "strip_size_kb": 64, 00:17:36.764 "state": "configuring", 00:17:36.764 "raid_level": "concat", 00:17:36.765 "superblock": true, 00:17:36.765 "num_base_bdevs": 3, 00:17:36.765 "num_base_bdevs_discovered": 1, 00:17:36.765 "num_base_bdevs_operational": 3, 00:17:36.765 "base_bdevs_list": [ 00:17:36.765 { 00:17:36.765 "name": "pt1", 
00:17:36.765 "uuid": "6cd64223-6aab-522a-8fb5-7bf2d15eec0f", 00:17:36.765 "is_configured": true, 00:17:36.765 "data_offset": 2048, 00:17:36.765 "data_size": 63488 00:17:36.765 }, 00:17:36.765 { 00:17:36.765 "name": null, 00:17:36.765 "uuid": "d9001971-2e05-5af6-9baa-34752c02895e", 00:17:36.765 "is_configured": false, 00:17:36.765 "data_offset": 2048, 00:17:36.765 "data_size": 63488 00:17:36.765 }, 00:17:36.765 { 00:17:36.765 "name": null, 00:17:36.765 "uuid": "0c5f9a00-f825-557c-8f18-ca05a4f89a21", 00:17:36.765 "is_configured": false, 00:17:36.765 "data_offset": 2048, 00:17:36.765 "data_size": 63488 00:17:36.765 } 00:17:36.765 ] 00:17:36.765 }' 00:17:36.765 13:41:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.765 13:41:15 -- common/autotest_common.sh@10 -- # set +x 00:17:37.404 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:37.404 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.404 13:41:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.404 [2024-07-10 13:41:16.755516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.404 [2024-07-10 13:41:16.755692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.404 [2024-07-10 13:41:16.755745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:37.404 [2024-07-10 13:41:16.755817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.404 [2024-07-10 13:41:16.756343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.404 [2024-07-10 13:41:16.756414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.404 [2024-07-10 13:41:16.756576] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:37.404 [2024-07-10 13:41:16.756625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.663 pt2 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.663 [2024-07-10 13:41:16.943198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.663 [2024-07-10 13:41:16.943326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.663 [2024-07-10 13:41:16.943372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:37.663 [2024-07-10 13:41:16.943410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.663 [2024-07-10 13:41:16.943850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.663 [2024-07-10 13:41:16.943914] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.663 [2024-07-10 13:41:16.944064] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:37.663 [2024-07-10 13:41:16.944146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.663 [2024-07-10 13:41:16.944291] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:17:37.663 [2024-07-10 13:41:16.944324] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:37.663 [2024-07-10 13:41:16.944472] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:37.663 [2024-07-10 13:41:16.944780] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:37.663 [2024-07-10 13:41:16.944818] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:37.663 [2024-07-10 13:41:16.944968] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.663 pt3 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.663 13:41:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.922 13:41:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.922 "name": "raid_bdev1", 00:17:37.922 "uuid": "17b3aacb-bd61-4e47-9333-e614e0651034", 00:17:37.922 "strip_size_kb": 64, 00:17:37.922 "state": "online", 00:17:37.922 "raid_level": "concat", 00:17:37.922 "superblock": true, 00:17:37.922 "num_base_bdevs": 3, 00:17:37.922 "num_base_bdevs_discovered": 3, 00:17:37.922 "num_base_bdevs_operational": 3, 00:17:37.922 "base_bdevs_list": [ 00:17:37.922 { 00:17:37.922 "name": "pt1", 00:17:37.923 "uuid": "6cd64223-6aab-522a-8fb5-7bf2d15eec0f", 00:17:37.923 "is_configured": true, 00:17:37.923 "data_offset": 2048, 00:17:37.923 "data_size": 63488 00:17:37.923 }, 00:17:37.923 { 00:17:37.923 "name": "pt2", 00:17:37.923 "uuid": "d9001971-2e05-5af6-9baa-34752c02895e", 00:17:37.923 "is_configured": true, 00:17:37.923 "data_offset": 2048, 00:17:37.923 "data_size": 63488 00:17:37.923 }, 00:17:37.923 { 00:17:37.923 "name": "pt3", 00:17:37.923 "uuid": "0c5f9a00-f825-557c-8f18-ca05a4f89a21", 00:17:37.923 "is_configured": true, 00:17:37.923 "data_offset": 2048, 00:17:37.923 "data_size": 63488 00:17:37.923 } 00:17:37.923 ] 00:17:37.923 }' 00:17:37.923 13:41:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.923 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.490 13:41:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.490 13:41:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:38.750 [2024-07-10 13:41:17.921672] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.750 13:41:17 -- bdev/bdev_raid.sh@430 -- # '[' 
17b3aacb-bd61-4e47-9333-e614e0651034 '!=' 17b3aacb-bd61-4e47-9333-e614e0651034 ']' 00:17:38.750 13:41:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:38.750 13:41:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:38.750 13:41:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:38.750 13:41:17 -- bdev/bdev_raid.sh@511 -- # killprocess 119915 00:17:38.750 13:41:17 -- common/autotest_common.sh@926 -- # '[' -z 119915 ']' 00:17:38.750 13:41:17 -- common/autotest_common.sh@930 -- # kill -0 119915 00:17:38.750 13:41:17 -- common/autotest_common.sh@931 -- # uname 00:17:38.750 13:41:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.750 13:41:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119915 00:17:38.750 killing process with pid 119915 00:17:38.750 13:41:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.750 13:41:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.750 13:41:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119915' 00:17:38.750 13:41:17 -- common/autotest_common.sh@945 -- # kill 119915 00:17:38.750 13:41:17 -- common/autotest_common.sh@950 -- # wait 119915 00:17:38.750 [2024-07-10 13:41:17.968047] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.750 [2024-07-10 13:41:17.968141] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.750 [2024-07-10 13:41:17.968196] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.750 [2024-07-10 13:41:17.968249] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:39.010 [2024-07-10 13:41:18.261754] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.391 ************************************ 00:17:40.391 END TEST raid_superblock_test 00:17:40.391 ************************************ 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:40.391 00:17:40.391 real 0m10.062s 00:17:40.391 user 0m17.061s 00:17:40.391 sys 0m1.324s 00:17:40.391 13:41:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.391 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:40.391 13:41:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:40.391 13:41:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.391 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.391 ************************************ 00:17:40.391 START TEST raid_state_function_test 00:17:40.391 ************************************ 00:17:40.391 13:41:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.391 13:41:19 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=120240 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120240' 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:40.391 Process raid pid: 120240 00:17:40.391 13:41:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120240 /var/tmp/spdk-raid.sock 00:17:40.391 13:41:19 -- common/autotest_common.sh@819 -- # '[' -z 120240 ']' 00:17:40.391 13:41:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.391 13:41:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.391 13:41:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.391 13:41:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.391 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.391 [2024-07-10 13:41:19.675869] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
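For reference, the base_bdevs expansion traced at bdev_raid.sh@206 above reduces to the following bash idiom (a minimal standalone sketch reconstructed from the xtrace; num_base_bdevs=3 is this run's value):

  # Build BaseBdev1..BaseBdevN as an array, then join it into the single
  # space-separated string that bdev_raid_create takes via -b.
  num_base_bdevs=3
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
  echo "${base_bdevs[*]}"   # -> BaseBdev1 BaseBdev2 BaseBdev3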
00:17:40.391 [2024-07-10 13:41:19.676048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.652 [2024-07-10 13:41:19.815194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.910 [2024-07-10 13:41:20.009007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.910 [2024-07-10 13:41:20.213322] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.170 13:41:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.170 13:41:20 -- common/autotest_common.sh@852 -- # return 0 00:17:41.170 13:41:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:41.430 [2024-07-10 13:41:20.652338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.430 [2024-07-10 13:41:20.652469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.430 [2024-07-10 13:41:20.652502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.430 [2024-07-10 13:41:20.652546] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.430 [2024-07-10 13:41:20.652568] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.430 [2024-07-10 13:41:20.652612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.430 13:41:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.689 13:41:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.689 "name": "Existed_Raid", 00:17:41.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.689 "strip_size_kb": 0, 00:17:41.689 "state": "configuring", 00:17:41.689 "raid_level": "raid1", 00:17:41.689 "superblock": false, 00:17:41.689 "num_base_bdevs": 3, 00:17:41.689 "num_base_bdevs_discovered": 0, 00:17:41.689 "num_base_bdevs_operational": 3, 00:17:41.689 "base_bdevs_list": [ 00:17:41.689 { 00:17:41.689 "name": "BaseBdev1", 00:17:41.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.689 "is_configured": false, 00:17:41.689 "data_offset": 0, 00:17:41.689 "data_size": 0 00:17:41.689 }, 00:17:41.689 { 00:17:41.689 "name": "BaseBdev2", 00:17:41.689 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:41.689 "is_configured": false, 00:17:41.689 "data_offset": 0, 00:17:41.689 "data_size": 0 00:17:41.689 }, 00:17:41.689 { 00:17:41.689 "name": "BaseBdev3", 00:17:41.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.689 "is_configured": false, 00:17:41.689 "data_offset": 0, 00:17:41.689 "data_size": 0 00:17:41.689 } 00:17:41.689 ] 00:17:41.689 }' 00:17:41.689 13:41:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.689 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:17:42.258 13:41:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.258 [2024-07-10 13:41:21.610521] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.258 [2024-07-10 13:41:21.610610] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:42.517 13:41:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:42.517 [2024-07-10 13:41:21.798205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.517 [2024-07-10 13:41:21.798284] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.517 [2024-07-10 13:41:21.798307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.517 [2024-07-10 13:41:21.798328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.517 [2024-07-10 13:41:21.798342] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.517 [2024-07-10 13:41:21.798373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.517 13:41:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.776 [2024-07-10 13:41:22.013808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.776 BaseBdev1 00:17:42.776 13:41:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:42.776 13:41:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:42.776 13:41:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:42.776 13:41:22 -- common/autotest_common.sh@889 -- # local i 00:17:42.776 13:41:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:42.776 13:41:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:42.776 13:41:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.035 13:41:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.302 [ 00:17:43.302 { 00:17:43.302 "name": "BaseBdev1", 00:17:43.302 "aliases": [ 00:17:43.302 "637c4245-da31-49ae-b801-e1bb55810ae9" 00:17:43.302 ], 00:17:43.302 "product_name": "Malloc disk", 00:17:43.302 "block_size": 512, 00:17:43.302 "num_blocks": 65536, 00:17:43.302 "uuid": "637c4245-da31-49ae-b801-e1bb55810ae9", 00:17:43.302 "assigned_rate_limits": { 00:17:43.302 "rw_ios_per_sec": 0, 00:17:43.302 "rw_mbytes_per_sec": 0, 00:17:43.302 "r_mbytes_per_sec": 0, 00:17:43.302 "w_mbytes_per_sec": 0 
00:17:43.302 }, 00:17:43.302 "claimed": true, 00:17:43.302 "claim_type": "exclusive_write", 00:17:43.302 "zoned": false, 00:17:43.302 "supported_io_types": { 00:17:43.302 "read": true, 00:17:43.302 "write": true, 00:17:43.302 "unmap": true, 00:17:43.302 "write_zeroes": true, 00:17:43.302 "flush": true, 00:17:43.302 "reset": true, 00:17:43.302 "compare": false, 00:17:43.302 "compare_and_write": false, 00:17:43.302 "abort": true, 00:17:43.302 "nvme_admin": false, 00:17:43.302 "nvme_io": false 00:17:43.302 }, 00:17:43.302 "memory_domains": [ 00:17:43.302 { 00:17:43.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.302 "dma_device_type": 2 00:17:43.302 } 00:17:43.302 ], 00:17:43.302 "driver_specific": {} 00:17:43.302 } 00:17:43.302 ] 00:17:43.302 13:41:22 -- common/autotest_common.sh@895 -- # return 0 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:43.302 13:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.303 "name": "Existed_Raid", 00:17:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.303 "strip_size_kb": 0, 00:17:43.303 "state": "configuring", 00:17:43.303 "raid_level": "raid1", 00:17:43.303 "superblock": false, 00:17:43.303 "num_base_bdevs": 3, 00:17:43.303 "num_base_bdevs_discovered": 1, 00:17:43.303 "num_base_bdevs_operational": 3, 00:17:43.303 "base_bdevs_list": [ 00:17:43.303 { 00:17:43.303 "name": "BaseBdev1", 00:17:43.303 "uuid": "637c4245-da31-49ae-b801-e1bb55810ae9", 00:17:43.303 "is_configured": true, 00:17:43.303 "data_offset": 0, 00:17:43.303 "data_size": 65536 00:17:43.303 }, 00:17:43.303 { 00:17:43.303 "name": "BaseBdev2", 00:17:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.303 "is_configured": false, 00:17:43.303 "data_offset": 0, 00:17:43.303 "data_size": 0 00:17:43.303 }, 00:17:43.303 { 00:17:43.303 "name": "BaseBdev3", 00:17:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.303 "is_configured": false, 00:17:43.303 "data_offset": 0, 00:17:43.303 "data_size": 0 00:17:43.303 } 00:17:43.303 ] 00:17:43.303 }' 00:17:43.303 13:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.303 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:17:43.881 13:41:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.140 [2024-07-10 13:41:23.375485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.140 [2024-07-10 13:41:23.375577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:17:44.140 13:41:23 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:44.140 13:41:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:44.400 [2024-07-10 13:41:23.559203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.400 [2024-07-10 13:41:23.560876] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.401 [2024-07-10 13:41:23.560960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.401 [2024-07-10 13:41:23.560983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.401 [2024-07-10 13:41:23.561012] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.401 13:41:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.660 13:41:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.660 "name": "Existed_Raid", 00:17:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.660 "strip_size_kb": 0, 00:17:44.660 "state": "configuring", 00:17:44.660 "raid_level": "raid1", 00:17:44.660 "superblock": false, 00:17:44.660 "num_base_bdevs": 3, 00:17:44.660 "num_base_bdevs_discovered": 1, 00:17:44.660 "num_base_bdevs_operational": 3, 00:17:44.660 "base_bdevs_list": [ 00:17:44.660 { 00:17:44.660 "name": "BaseBdev1", 00:17:44.660 "uuid": "637c4245-da31-49ae-b801-e1bb55810ae9", 00:17:44.660 "is_configured": true, 00:17:44.660 "data_offset": 0, 00:17:44.660 "data_size": 65536 00:17:44.660 }, 00:17:44.660 { 00:17:44.660 "name": "BaseBdev2", 00:17:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.660 "is_configured": false, 00:17:44.660 "data_offset": 0, 00:17:44.660 "data_size": 0 00:17:44.660 }, 00:17:44.660 { 00:17:44.660 "name": "BaseBdev3", 00:17:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.660 "is_configured": false, 00:17:44.660 "data_offset": 0, 00:17:44.660 "data_size": 0 00:17:44.660 } 00:17:44.660 ] 00:17:44.660 }' 00:17:44.660 13:41:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.660 13:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:45.229 13:41:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.489 [2024-07-10 13:41:24.607460] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.489 BaseBdev2 00:17:45.489 13:41:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:45.489 13:41:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:45.489 13:41:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.489 13:41:24 -- common/autotest_common.sh@889 -- # local i 00:17:45.489 13:41:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.489 13:41:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.489 13:41:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.489 13:41:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:45.748 [ 00:17:45.748 { 00:17:45.748 "name": "BaseBdev2", 00:17:45.748 "aliases": [ 00:17:45.748 "63feec99-c196-4a9a-b5b7-5e996f41020f" 00:17:45.748 ], 00:17:45.748 "product_name": "Malloc disk", 00:17:45.748 "block_size": 512, 00:17:45.748 "num_blocks": 65536, 00:17:45.748 "uuid": "63feec99-c196-4a9a-b5b7-5e996f41020f", 00:17:45.748 "assigned_rate_limits": { 00:17:45.748 "rw_ios_per_sec": 0, 00:17:45.748 "rw_mbytes_per_sec": 0, 00:17:45.748 "r_mbytes_per_sec": 0, 00:17:45.748 "w_mbytes_per_sec": 0 00:17:45.748 }, 00:17:45.748 "claimed": true, 00:17:45.748 "claim_type": "exclusive_write", 00:17:45.748 "zoned": false, 00:17:45.748 "supported_io_types": { 00:17:45.748 "read": true, 00:17:45.748 "write": true, 00:17:45.748 "unmap": true, 00:17:45.748 "write_zeroes": true, 00:17:45.748 "flush": true, 00:17:45.748 "reset": true, 00:17:45.748 "compare": false, 00:17:45.748 "compare_and_write": false, 00:17:45.748 "abort": true, 00:17:45.748 "nvme_admin": false, 00:17:45.748 "nvme_io": false 00:17:45.748 }, 00:17:45.748 "memory_domains": [ 00:17:45.748 { 00:17:45.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.748 "dma_device_type": 2 00:17:45.748 } 00:17:45.748 ], 00:17:45.748 "driver_specific": {} 00:17:45.748 } 00:17:45.748 ] 00:17:45.748 13:41:25 -- common/autotest_common.sh@895 -- # return 0 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.748 13:41:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.008 13:41:25 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:46.008 "name": "Existed_Raid", 00:17:46.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.008 "strip_size_kb": 0, 00:17:46.008 "state": "configuring", 00:17:46.008 "raid_level": "raid1", 00:17:46.008 "superblock": false, 00:17:46.008 "num_base_bdevs": 3, 00:17:46.008 "num_base_bdevs_discovered": 2, 00:17:46.008 "num_base_bdevs_operational": 3, 00:17:46.008 "base_bdevs_list": [ 00:17:46.008 { 00:17:46.008 "name": "BaseBdev1", 00:17:46.008 "uuid": "637c4245-da31-49ae-b801-e1bb55810ae9", 00:17:46.008 "is_configured": true, 00:17:46.008 "data_offset": 0, 00:17:46.008 "data_size": 65536 00:17:46.008 }, 00:17:46.008 { 00:17:46.008 "name": "BaseBdev2", 00:17:46.008 "uuid": "63feec99-c196-4a9a-b5b7-5e996f41020f", 00:17:46.008 "is_configured": true, 00:17:46.008 "data_offset": 0, 00:17:46.008 "data_size": 65536 00:17:46.008 }, 00:17:46.008 { 00:17:46.008 "name": "BaseBdev3", 00:17:46.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.008 "is_configured": false, 00:17:46.008 "data_offset": 0, 00:17:46.008 "data_size": 0 00:17:46.008 } 00:17:46.008 ] 00:17:46.008 }' 00:17:46.008 13:41:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.008 13:41:25 -- common/autotest_common.sh@10 -- # set +x 00:17:46.578 13:41:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.838 [2024-07-10 13:41:25.981574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.838 [2024-07-10 13:41:25.981699] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:46.838 [2024-07-10 13:41:25.981722] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:46.838 [2024-07-10 13:41:25.981844] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:46.838 [2024-07-10 13:41:25.982144] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:46.838 [2024-07-10 13:41:25.982182] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:46.838 [2024-07-10 13:41:25.982432] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.838 BaseBdev3 00:17:46.838 13:41:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:46.838 13:41:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:46.838 13:41:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:46.838 13:41:25 -- common/autotest_common.sh@889 -- # local i 00:17:46.838 13:41:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:46.838 13:41:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:46.838 13:41:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.838 13:41:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.098 [ 00:17:47.098 { 00:17:47.098 "name": "BaseBdev3", 00:17:47.098 "aliases": [ 00:17:47.098 "caa0f36f-37b1-4d02-8ebe-040ff9a20b6b" 00:17:47.098 ], 00:17:47.098 "product_name": "Malloc disk", 00:17:47.098 "block_size": 512, 00:17:47.098 "num_blocks": 65536, 00:17:47.098 "uuid": "caa0f36f-37b1-4d02-8ebe-040ff9a20b6b", 00:17:47.098 "assigned_rate_limits": { 00:17:47.098 "rw_ios_per_sec": 0, 00:17:47.098 "rw_mbytes_per_sec": 0, 
00:17:47.098 "r_mbytes_per_sec": 0, 00:17:47.098 "w_mbytes_per_sec": 0 00:17:47.098 }, 00:17:47.098 "claimed": true, 00:17:47.098 "claim_type": "exclusive_write", 00:17:47.098 "zoned": false, 00:17:47.098 "supported_io_types": { 00:17:47.098 "read": true, 00:17:47.098 "write": true, 00:17:47.098 "unmap": true, 00:17:47.098 "write_zeroes": true, 00:17:47.098 "flush": true, 00:17:47.098 "reset": true, 00:17:47.098 "compare": false, 00:17:47.098 "compare_and_write": false, 00:17:47.098 "abort": true, 00:17:47.098 "nvme_admin": false, 00:17:47.098 "nvme_io": false 00:17:47.098 }, 00:17:47.098 "memory_domains": [ 00:17:47.098 { 00:17:47.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.098 "dma_device_type": 2 00:17:47.098 } 00:17:47.098 ], 00:17:47.098 "driver_specific": {} 00:17:47.098 } 00:17:47.098 ] 00:17:47.098 13:41:26 -- common/autotest_common.sh@895 -- # return 0 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.098 13:41:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.358 13:41:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.358 "name": "Existed_Raid", 00:17:47.358 "uuid": "3122ddc2-30b0-440c-bbce-d9ccba32290b", 00:17:47.358 "strip_size_kb": 0, 00:17:47.358 "state": "online", 00:17:47.358 "raid_level": "raid1", 00:17:47.358 "superblock": false, 00:17:47.358 "num_base_bdevs": 3, 00:17:47.358 "num_base_bdevs_discovered": 3, 00:17:47.358 "num_base_bdevs_operational": 3, 00:17:47.358 "base_bdevs_list": [ 00:17:47.358 { 00:17:47.358 "name": "BaseBdev1", 00:17:47.358 "uuid": "637c4245-da31-49ae-b801-e1bb55810ae9", 00:17:47.358 "is_configured": true, 00:17:47.358 "data_offset": 0, 00:17:47.358 "data_size": 65536 00:17:47.358 }, 00:17:47.358 { 00:17:47.358 "name": "BaseBdev2", 00:17:47.358 "uuid": "63feec99-c196-4a9a-b5b7-5e996f41020f", 00:17:47.358 "is_configured": true, 00:17:47.358 "data_offset": 0, 00:17:47.358 "data_size": 65536 00:17:47.358 }, 00:17:47.358 { 00:17:47.358 "name": "BaseBdev3", 00:17:47.358 "uuid": "caa0f36f-37b1-4d02-8ebe-040ff9a20b6b", 00:17:47.358 "is_configured": true, 00:17:47.358 "data_offset": 0, 00:17:47.358 "data_size": 65536 00:17:47.358 } 00:17:47.358 ] 00:17:47.358 }' 00:17:47.358 13:41:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.358 13:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:48.055 13:41:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.055 [2024-07-10 
13:41:27.335451] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.322 "name": "Existed_Raid", 00:17:48.322 "uuid": "3122ddc2-30b0-440c-bbce-d9ccba32290b", 00:17:48.322 "strip_size_kb": 0, 00:17:48.322 "state": "online", 00:17:48.322 "raid_level": "raid1", 00:17:48.322 "superblock": false, 00:17:48.322 "num_base_bdevs": 3, 00:17:48.322 "num_base_bdevs_discovered": 2, 00:17:48.322 "num_base_bdevs_operational": 2, 00:17:48.322 "base_bdevs_list": [ 00:17:48.322 { 00:17:48.322 "name": null, 00:17:48.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.322 "is_configured": false, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 }, 00:17:48.322 { 00:17:48.322 "name": "BaseBdev2", 00:17:48.322 "uuid": "63feec99-c196-4a9a-b5b7-5e996f41020f", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 }, 00:17:48.322 { 00:17:48.322 "name": "BaseBdev3", 00:17:48.322 "uuid": "caa0f36f-37b1-4d02-8ebe-040ff9a20b6b", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 } 00:17:48.322 ] 00:17:48.322 }' 00:17:48.322 13:41:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.322 13:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:48.892 13:41:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:48.892 13:41:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.892 13:41:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:48.892 13:41:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.151 13:41:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.151 13:41:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.151 13:41:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:49.411 [2024-07-10 13:41:28.602804] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
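The has_redundancy call traced at bdev_raid.sh@195-196 above is why the raid is still expected "online" after BaseBdev1 is deleted: raid1 returns 0 here, whereas concat returned 1 in the earlier superblock test. A minimal sketch consistent with the traced behavior (the real helper in bdev_raid.sh may enumerate more levels):

  has_redundancy() {
      case $1 in
      raid1)
          return 0 ;;   # mirrored: survives losing a base bdev, state stays online
      *)
          return 1 ;;   # concat/raid0 path seen earlier: no redundancy
      esac
  }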
00:17:49.411 13:41:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.411 13:41:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.411 13:41:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.411 13:41:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.671 13:41:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.671 13:41:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.671 13:41:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:49.930 [2024-07-10 13:41:29.079542] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.930 [2024-07-10 13:41:29.079639] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.930 [2024-07-10 13:41:29.079721] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.930 [2024-07-10 13:41:29.184299] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.930 [2024-07-10 13:41:29.184419] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:49.930 13:41:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.930 13:41:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.930 13:41:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.930 13:41:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.191 13:41:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:50.191 13:41:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:50.191 13:41:29 -- bdev/bdev_raid.sh@287 -- # killprocess 120240 00:17:50.191 13:41:29 -- common/autotest_common.sh@926 -- # '[' -z 120240 ']' 00:17:50.191 13:41:29 -- common/autotest_common.sh@930 -- # kill -0 120240 00:17:50.191 13:41:29 -- common/autotest_common.sh@931 -- # uname 00:17:50.191 13:41:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.191 13:41:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120240 00:17:50.191 killing process with pid 120240 00:17:50.191 13:41:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.191 13:41:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.191 13:41:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120240' 00:17:50.191 13:41:29 -- common/autotest_common.sh@945 -- # kill 120240 00:17:50.191 13:41:29 -- common/autotest_common.sh@950 -- # wait 120240 00:17:50.191 [2024-07-10 13:41:29.400913] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.191 [2024-07-10 13:41:29.401084] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.572 ************************************ 00:17:51.572 END TEST raid_state_function_test 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:51.572 00:17:51.572 real 0m11.205s 00:17:51.572 user 0m19.224s 00:17:51.572 sys 0m1.335s 00:17:51.572 13:41:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.572 13:41:30 -- common/autotest_common.sh@10 -- # set +x 00:17:51.572 ************************************ 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
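The killprocess 120240 teardown traced above follows this shape (a simplified sketch of the Linux branch exercised in this run; the real autotest_common.sh helper handles sudo-wrapped targets and other platforms, not shown):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # trace: '[' -z 120240 ']'
      kill -0 "$pid" || return 1             # is the target still alive?
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
      fi
      if [ "$process_name" != sudo ]; then
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"                            # reap it so the next test starts clean
  }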
00:17:51.572 13:41:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:51.572 13:41:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.572 13:41:30 -- common/autotest_common.sh@10 -- # set +x 00:17:51.572 ************************************ 00:17:51.572 START TEST raid_state_function_test_sb 00:17:51.572 ************************************ 00:17:51.572 13:41:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=120624 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120624' 00:17:51.572 Process raid pid: 120624 00:17:51.572 13:41:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120624 /var/tmp/spdk-raid.sock 00:17:51.572 13:41:30 -- common/autotest_common.sh@819 -- # '[' -z 120624 ']' 00:17:51.572 13:41:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.572 13:41:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.572 13:41:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
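The startup sequence traced at bdev_raid.sh@225-228 above comes down to launching the bdev_svc stub app and blocking on its RPC socket (paths and socket name are this run's; the backgrounding is implied by the raid_pid capture in the trace):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  echo "Process raid pid: $raid_pid"
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # block until the socket accepts RPCs
  # from here on, every call in the trace is: rpc.py -s /var/tmp/spdk-raid.sock <method>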
00:17:51.572 13:41:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.572 13:41:30 -- common/autotest_common.sh@10 -- # set +x 00:17:51.831 [2024-07-10 13:41:30.946523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:51.831 [2024-07-10 13:41:30.946754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.831 [2024-07-10 13:41:31.102279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.091 [2024-07-10 13:41:31.314392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.351 [2024-07-10 13:41:31.516625] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.610 13:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.610 13:41:31 -- common/autotest_common.sh@852 -- # return 0 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:52.610 [2024-07-10 13:41:31.934906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.610 [2024-07-10 13:41:31.935091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.610 [2024-07-10 13:41:31.935124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.610 [2024-07-10 13:41:31.935153] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.610 [2024-07-10 13:41:31.935168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.610 [2024-07-10 13:41:31.935218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.610 13:41:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.870 13:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.870 "name": "Existed_Raid", 00:17:52.870 "uuid": "d15cb80e-280b-47c8-a80e-39229a8ad67f", 00:17:52.870 "strip_size_kb": 0, 00:17:52.870 "state": "configuring", 00:17:52.870 "raid_level": "raid1", 00:17:52.870 "superblock": true, 00:17:52.870 "num_base_bdevs": 3, 00:17:52.870 "num_base_bdevs_discovered": 0, 00:17:52.870 "num_base_bdevs_operational": 3, 00:17:52.870 "base_bdevs_list": [ 00:17:52.870 { 00:17:52.870 "name": "BaseBdev1", 
00:17:52.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.870 "is_configured": false, 00:17:52.870 "data_offset": 0, 00:17:52.870 "data_size": 0 00:17:52.870 }, 00:17:52.870 { 00:17:52.870 "name": "BaseBdev2", 00:17:52.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.870 "is_configured": false, 00:17:52.870 "data_offset": 0, 00:17:52.870 "data_size": 0 00:17:52.870 }, 00:17:52.870 { 00:17:52.870 "name": "BaseBdev3", 00:17:52.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.870 "is_configured": false, 00:17:52.870 "data_offset": 0, 00:17:52.870 "data_size": 0 00:17:52.870 } 00:17:52.870 ] 00:17:52.870 }' 00:17:52.870 13:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.870 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:17:53.440 13:41:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.700 [2024-07-10 13:41:32.908962] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.700 [2024-07-10 13:41:32.909124] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:53.700 13:41:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:53.959 [2024-07-10 13:41:33.100742] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.959 [2024-07-10 13:41:33.100877] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.959 [2024-07-10 13:41:33.100903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.959 [2024-07-10 13:41:33.100928] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.959 [2024-07-10 13:41:33.100941] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.959 [2024-07-10 13:41:33.100980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.959 13:41:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.219 [2024-07-10 13:41:33.337591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.219 BaseBdev1 00:17:54.219 13:41:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:54.219 13:41:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:54.219 13:41:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:54.219 13:41:33 -- common/autotest_common.sh@889 -- # local i 00:17:54.219 13:41:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:54.219 13:41:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:54.219 13:41:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.219 13:41:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.479 [ 00:17:54.479 { 00:17:54.479 "name": "BaseBdev1", 00:17:54.479 "aliases": [ 00:17:54.479 "c36522a0-e8d5-4f0d-8259-81e19ca68940" 00:17:54.479 ], 00:17:54.479 "product_name": "Malloc disk", 00:17:54.479 "block_size": 512, 00:17:54.479 "num_blocks": 65536, 
00:17:54.479 "uuid": "c36522a0-e8d5-4f0d-8259-81e19ca68940", 00:17:54.479 "assigned_rate_limits": { 00:17:54.479 "rw_ios_per_sec": 0, 00:17:54.479 "rw_mbytes_per_sec": 0, 00:17:54.479 "r_mbytes_per_sec": 0, 00:17:54.479 "w_mbytes_per_sec": 0 00:17:54.479 }, 00:17:54.479 "claimed": true, 00:17:54.479 "claim_type": "exclusive_write", 00:17:54.479 "zoned": false, 00:17:54.479 "supported_io_types": { 00:17:54.479 "read": true, 00:17:54.479 "write": true, 00:17:54.479 "unmap": true, 00:17:54.479 "write_zeroes": true, 00:17:54.479 "flush": true, 00:17:54.479 "reset": true, 00:17:54.479 "compare": false, 00:17:54.479 "compare_and_write": false, 00:17:54.479 "abort": true, 00:17:54.479 "nvme_admin": false, 00:17:54.479 "nvme_io": false 00:17:54.479 }, 00:17:54.479 "memory_domains": [ 00:17:54.479 { 00:17:54.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.479 "dma_device_type": 2 00:17:54.479 } 00:17:54.479 ], 00:17:54.479 "driver_specific": {} 00:17:54.479 } 00:17:54.479 ] 00:17:54.479 13:41:33 -- common/autotest_common.sh@895 -- # return 0 00:17:54.479 13:41:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.480 13:41:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.739 13:41:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.740 "name": "Existed_Raid", 00:17:54.740 "uuid": "933baad5-7d0f-41f2-83bd-2e811e69bff3", 00:17:54.740 "strip_size_kb": 0, 00:17:54.740 "state": "configuring", 00:17:54.740 "raid_level": "raid1", 00:17:54.740 "superblock": true, 00:17:54.740 "num_base_bdevs": 3, 00:17:54.740 "num_base_bdevs_discovered": 1, 00:17:54.740 "num_base_bdevs_operational": 3, 00:17:54.740 "base_bdevs_list": [ 00:17:54.740 { 00:17:54.740 "name": "BaseBdev1", 00:17:54.740 "uuid": "c36522a0-e8d5-4f0d-8259-81e19ca68940", 00:17:54.740 "is_configured": true, 00:17:54.740 "data_offset": 2048, 00:17:54.740 "data_size": 63488 00:17:54.740 }, 00:17:54.740 { 00:17:54.740 "name": "BaseBdev2", 00:17:54.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.740 "is_configured": false, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 0 00:17:54.740 }, 00:17:54.740 { 00:17:54.740 "name": "BaseBdev3", 00:17:54.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.740 "is_configured": false, 00:17:54.740 "data_offset": 0, 00:17:54.740 "data_size": 0 00:17:54.740 } 00:17:54.740 ] 00:17:54.740 }' 00:17:54.740 13:41:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.740 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:17:55.309 13:41:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:17:55.569 [2024-07-10 13:41:34.663382] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.569 [2024-07-10 13:41:34.663534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:55.569 13:41:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:55.569 13:41:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:55.828 13:41:34 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.828 BaseBdev1 00:17:55.828 13:41:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:55.828 13:41:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:55.828 13:41:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.828 13:41:35 -- common/autotest_common.sh@889 -- # local i 00:17:55.828 13:41:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.828 13:41:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.828 13:41:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.086 13:41:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.347 [ 00:17:56.347 { 00:17:56.347 "name": "BaseBdev1", 00:17:56.347 "aliases": [ 00:17:56.347 "b17a1a0d-7430-41e6-b411-94ffbf9165a8" 00:17:56.347 ], 00:17:56.347 "product_name": "Malloc disk", 00:17:56.347 "block_size": 512, 00:17:56.347 "num_blocks": 65536, 00:17:56.347 "uuid": "b17a1a0d-7430-41e6-b411-94ffbf9165a8", 00:17:56.347 "assigned_rate_limits": { 00:17:56.347 "rw_ios_per_sec": 0, 00:17:56.347 "rw_mbytes_per_sec": 0, 00:17:56.347 "r_mbytes_per_sec": 0, 00:17:56.347 "w_mbytes_per_sec": 0 00:17:56.347 }, 00:17:56.347 "claimed": false, 00:17:56.347 "zoned": false, 00:17:56.347 "supported_io_types": { 00:17:56.347 "read": true, 00:17:56.347 "write": true, 00:17:56.347 "unmap": true, 00:17:56.348 "write_zeroes": true, 00:17:56.348 "flush": true, 00:17:56.348 "reset": true, 00:17:56.348 "compare": false, 00:17:56.348 "compare_and_write": false, 00:17:56.348 "abort": true, 00:17:56.348 "nvme_admin": false, 00:17:56.348 "nvme_io": false 00:17:56.348 }, 00:17:56.348 "memory_domains": [ 00:17:56.348 { 00:17:56.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.348 "dma_device_type": 2 00:17:56.348 } 00:17:56.348 ], 00:17:56.348 "driver_specific": {} 00:17:56.348 } 00:17:56.348 ] 00:17:56.348 13:41:35 -- common/autotest_common.sh@895 -- # return 0 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:56.348 [2024-07-10 13:41:35.679061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.348 [2024-07-10 13:41:35.681170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.348 [2024-07-10 13:41:35.681269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.348 [2024-07-10 13:41:35.681295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.348 [2024-07-10 13:41:35.681330] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.348 13:41:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.622 13:41:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.622 "name": "Existed_Raid", 00:17:56.622 "uuid": "230f03ba-bd6c-4c8b-b42e-a3d2deddb8c1", 00:17:56.622 "strip_size_kb": 0, 00:17:56.622 "state": "configuring", 00:17:56.622 "raid_level": "raid1", 00:17:56.622 "superblock": true, 00:17:56.622 "num_base_bdevs": 3, 00:17:56.622 "num_base_bdevs_discovered": 1, 00:17:56.622 "num_base_bdevs_operational": 3, 00:17:56.622 "base_bdevs_list": [ 00:17:56.622 { 00:17:56.622 "name": "BaseBdev1", 00:17:56.622 "uuid": "b17a1a0d-7430-41e6-b411-94ffbf9165a8", 00:17:56.622 "is_configured": true, 00:17:56.622 "data_offset": 2048, 00:17:56.622 "data_size": 63488 00:17:56.622 }, 00:17:56.622 { 00:17:56.622 "name": "BaseBdev2", 00:17:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.622 "is_configured": false, 00:17:56.622 "data_offset": 0, 00:17:56.622 "data_size": 0 00:17:56.622 }, 00:17:56.622 { 00:17:56.622 "name": "BaseBdev3", 00:17:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.622 "is_configured": false, 00:17:56.622 "data_offset": 0, 00:17:56.622 "data_size": 0 00:17:56.622 } 00:17:56.622 ] 00:17:56.622 }' 00:17:56.622 13:41:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.622 13:41:35 -- common/autotest_common.sh@10 -- # set +x 00:17:57.198 13:41:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.457 [2024-07-10 13:41:36.672142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.457 BaseBdev2 00:17:57.457 13:41:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:57.457 13:41:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:57.457 13:41:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.457 13:41:36 -- common/autotest_common.sh@889 -- # local i 00:17:57.457 13:41:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.457 13:41:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.457 13:41:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.716 13:41:36 -- common/autotest_common.sh@894 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.716 [ 00:17:57.716 { 00:17:57.716 "name": "BaseBdev2", 00:17:57.716 "aliases": [ 00:17:57.716 "71f3b427-53f7-4808-a066-58dd875d4010" 00:17:57.716 ], 00:17:57.716 "product_name": "Malloc disk", 00:17:57.716 "block_size": 512, 00:17:57.716 "num_blocks": 65536, 00:17:57.716 "uuid": "71f3b427-53f7-4808-a066-58dd875d4010", 00:17:57.716 "assigned_rate_limits": { 00:17:57.716 "rw_ios_per_sec": 0, 00:17:57.716 "rw_mbytes_per_sec": 0, 00:17:57.716 "r_mbytes_per_sec": 0, 00:17:57.716 "w_mbytes_per_sec": 0 00:17:57.716 }, 00:17:57.716 "claimed": true, 00:17:57.716 "claim_type": "exclusive_write", 00:17:57.716 "zoned": false, 00:17:57.716 "supported_io_types": { 00:17:57.716 "read": true, 00:17:57.716 "write": true, 00:17:57.716 "unmap": true, 00:17:57.716 "write_zeroes": true, 00:17:57.716 "flush": true, 00:17:57.716 "reset": true, 00:17:57.716 "compare": false, 00:17:57.716 "compare_and_write": false, 00:17:57.716 "abort": true, 00:17:57.716 "nvme_admin": false, 00:17:57.716 "nvme_io": false 00:17:57.716 }, 00:17:57.716 "memory_domains": [ 00:17:57.716 { 00:17:57.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.716 "dma_device_type": 2 00:17:57.716 } 00:17:57.716 ], 00:17:57.716 "driver_specific": {} 00:17:57.716 } 00:17:57.716 ] 00:17:57.716 13:41:37 -- common/autotest_common.sh@895 -- # return 0 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.716 13:41:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.976 13:41:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.976 "name": "Existed_Raid", 00:17:57.976 "uuid": "230f03ba-bd6c-4c8b-b42e-a3d2deddb8c1", 00:17:57.976 "strip_size_kb": 0, 00:17:57.976 "state": "configuring", 00:17:57.976 "raid_level": "raid1", 00:17:57.976 "superblock": true, 00:17:57.976 "num_base_bdevs": 3, 00:17:57.976 "num_base_bdevs_discovered": 2, 00:17:57.976 "num_base_bdevs_operational": 3, 00:17:57.976 "base_bdevs_list": [ 00:17:57.976 { 00:17:57.976 "name": "BaseBdev1", 00:17:57.976 "uuid": "b17a1a0d-7430-41e6-b411-94ffbf9165a8", 00:17:57.976 "is_configured": true, 00:17:57.976 "data_offset": 2048, 00:17:57.976 "data_size": 63488 00:17:57.976 }, 00:17:57.976 { 00:17:57.976 "name": "BaseBdev2", 00:17:57.976 "uuid": "71f3b427-53f7-4808-a066-58dd875d4010", 00:17:57.976 "is_configured": true, 00:17:57.976 "data_offset": 2048, 00:17:57.976 "data_size": 63488 00:17:57.976 }, 
00:17:57.976 { 00:17:57.976 "name": "BaseBdev3", 00:17:57.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.976 "is_configured": false, 00:17:57.976 "data_offset": 0, 00:17:57.976 "data_size": 0 00:17:57.976 } 00:17:57.976 ] 00:17:57.976 }' 00:17:57.976 13:41:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.976 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:17:58.544 13:41:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.803 [2024-07-10 13:41:38.012561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.803 [2024-07-10 13:41:38.012872] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:58.803 [2024-07-10 13:41:38.012907] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:58.803 [2024-07-10 13:41:38.013065] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:58.803 BaseBdev3 00:17:58.803 [2024-07-10 13:41:38.013406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:58.803 [2024-07-10 13:41:38.013451] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:58.803 [2024-07-10 13:41:38.013629] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.803 13:41:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:58.803 13:41:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:58.803 13:41:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:58.803 13:41:38 -- common/autotest_common.sh@889 -- # local i 00:17:58.803 13:41:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:58.803 13:41:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:58.803 13:41:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.061 13:41:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:59.061 [ 00:17:59.061 { 00:17:59.061 "name": "BaseBdev3", 00:17:59.061 "aliases": [ 00:17:59.061 "42860019-b251-4e1e-b229-2935dc83823f" 00:17:59.061 ], 00:17:59.061 "product_name": "Malloc disk", 00:17:59.061 "block_size": 512, 00:17:59.061 "num_blocks": 65536, 00:17:59.062 "uuid": "42860019-b251-4e1e-b229-2935dc83823f", 00:17:59.062 "assigned_rate_limits": { 00:17:59.062 "rw_ios_per_sec": 0, 00:17:59.062 "rw_mbytes_per_sec": 0, 00:17:59.062 "r_mbytes_per_sec": 0, 00:17:59.062 "w_mbytes_per_sec": 0 00:17:59.062 }, 00:17:59.062 "claimed": true, 00:17:59.062 "claim_type": "exclusive_write", 00:17:59.062 "zoned": false, 00:17:59.062 "supported_io_types": { 00:17:59.062 "read": true, 00:17:59.062 "write": true, 00:17:59.062 "unmap": true, 00:17:59.062 "write_zeroes": true, 00:17:59.062 "flush": true, 00:17:59.062 "reset": true, 00:17:59.062 "compare": false, 00:17:59.062 "compare_and_write": false, 00:17:59.062 "abort": true, 00:17:59.062 "nvme_admin": false, 00:17:59.062 "nvme_io": false 00:17:59.062 }, 00:17:59.062 "memory_domains": [ 00:17:59.062 { 00:17:59.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.062 "dma_device_type": 2 00:17:59.062 } 00:17:59.062 ], 00:17:59.062 "driver_specific": {} 00:17:59.062 } 00:17:59.062 ] 00:17:59.062 13:41:38 -- 
common/autotest_common.sh@895 -- # return 0 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.062 13:41:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.319 13:41:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.319 "name": "Existed_Raid", 00:17:59.319 "uuid": "230f03ba-bd6c-4c8b-b42e-a3d2deddb8c1", 00:17:59.319 "strip_size_kb": 0, 00:17:59.319 "state": "online", 00:17:59.319 "raid_level": "raid1", 00:17:59.319 "superblock": true, 00:17:59.319 "num_base_bdevs": 3, 00:17:59.319 "num_base_bdevs_discovered": 3, 00:17:59.319 "num_base_bdevs_operational": 3, 00:17:59.319 "base_bdevs_list": [ 00:17:59.319 { 00:17:59.319 "name": "BaseBdev1", 00:17:59.319 "uuid": "b17a1a0d-7430-41e6-b411-94ffbf9165a8", 00:17:59.319 "is_configured": true, 00:17:59.319 "data_offset": 2048, 00:17:59.319 "data_size": 63488 00:17:59.319 }, 00:17:59.319 { 00:17:59.319 "name": "BaseBdev2", 00:17:59.319 "uuid": "71f3b427-53f7-4808-a066-58dd875d4010", 00:17:59.319 "is_configured": true, 00:17:59.319 "data_offset": 2048, 00:17:59.319 "data_size": 63488 00:17:59.319 }, 00:17:59.319 { 00:17:59.319 "name": "BaseBdev3", 00:17:59.319 "uuid": "42860019-b251-4e1e-b229-2935dc83823f", 00:17:59.319 "is_configured": true, 00:17:59.319 "data_offset": 2048, 00:17:59.319 "data_size": 63488 00:17:59.319 } 00:17:59.319 ] 00:17:59.319 }' 00:17:59.319 13:41:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.319 13:41:38 -- common/autotest_common.sh@10 -- # set +x 00:17:59.886 13:41:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.145 [2024-07-10 13:41:39.358408] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
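The verify_raid_bdev_state blocks that recur throughout this trace reduce to one RPC call plus a jq filter; a minimal standalone sketch using only the commands visible in the log (variable names here are illustrative, not part of the test suite):

  # fetch every raid bdev from the app under test, keep the one being verified
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  raid_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid")')
  # compare individual fields against the expected values, e.g. the state
  [ "$(jq -r '.state' <<< "$raid_bdev_info")" = online ] ||
      echo "Existed_Raid is not online"

The same pattern is what the trace below is exercising: after BaseBdev1 is deleted out from under the raid1 array, the array is expected to stay online with two of three base bdevs.
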
00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.145 13:41:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.403 13:41:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.403 "name": "Existed_Raid", 00:18:00.403 "uuid": "230f03ba-bd6c-4c8b-b42e-a3d2deddb8c1", 00:18:00.403 "strip_size_kb": 0, 00:18:00.403 "state": "online", 00:18:00.403 "raid_level": "raid1", 00:18:00.403 "superblock": true, 00:18:00.403 "num_base_bdevs": 3, 00:18:00.403 "num_base_bdevs_discovered": 2, 00:18:00.403 "num_base_bdevs_operational": 2, 00:18:00.403 "base_bdevs_list": [ 00:18:00.403 { 00:18:00.403 "name": null, 00:18:00.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.403 "is_configured": false, 00:18:00.403 "data_offset": 2048, 00:18:00.403 "data_size": 63488 00:18:00.403 }, 00:18:00.403 { 00:18:00.403 "name": "BaseBdev2", 00:18:00.403 "uuid": "71f3b427-53f7-4808-a066-58dd875d4010", 00:18:00.403 "is_configured": true, 00:18:00.403 "data_offset": 2048, 00:18:00.403 "data_size": 63488 00:18:00.403 }, 00:18:00.403 { 00:18:00.403 "name": "BaseBdev3", 00:18:00.403 "uuid": "42860019-b251-4e1e-b229-2935dc83823f", 00:18:00.403 "is_configured": true, 00:18:00.403 "data_offset": 2048, 00:18:00.403 "data_size": 63488 00:18:00.403 } 00:18:00.403 ] 00:18:00.403 }' 00:18:00.403 13:41:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.403 13:41:39 -- common/autotest_common.sh@10 -- # set +x 00:18:00.968 13:41:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:00.968 13:41:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:00.968 13:41:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.968 13:41:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.227 13:41:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.227 13:41:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.227 13:41:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.227 [2024-07-10 13:41:40.570573] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.486 13:41:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:01.486 13:41:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.486 13:41:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.486 13:41:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.744 13:41:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.744 13:41:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.744 13:41:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:01.744 [2024-07-10 13:41:41.073629] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.744 [2024-07-10 13:41:41.073741] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.744 [2024-07-10 13:41:41.073826] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.010 [2024-07-10 13:41:41.190676] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.010 [2024-07-10 13:41:41.190826] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:02.010 13:41:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.010 13:41:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.010 13:41:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.010 13:41:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.276 13:41:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:02.276 13:41:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:02.276 13:41:41 -- bdev/bdev_raid.sh@287 -- # killprocess 120624 00:18:02.276 13:41:41 -- common/autotest_common.sh@926 -- # '[' -z 120624 ']' 00:18:02.276 13:41:41 -- common/autotest_common.sh@930 -- # kill -0 120624 00:18:02.276 13:41:41 -- common/autotest_common.sh@931 -- # uname 00:18:02.276 13:41:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.276 13:41:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120624 00:18:02.276 killing process with pid 120624 00:18:02.276 13:41:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.276 13:41:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.276 13:41:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120624' 00:18:02.276 13:41:41 -- common/autotest_common.sh@945 -- # kill 120624 00:18:02.276 13:41:41 -- common/autotest_common.sh@950 -- # wait 120624 00:18:02.276 [2024-07-10 13:41:41.428467] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.276 [2024-07-10 13:41:41.428611] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.655 ************************************ 00:18:03.655 END TEST raid_state_function_test_sb 00:18:03.655 ************************************ 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:03.655 00:18:03.655 real 0m11.983s 00:18:03.655 user 0m20.417s 00:18:03.655 sys 0m1.496s 00:18:03.655 13:41:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.655 13:41:42 -- common/autotest_common.sh@10 -- # set +x 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:03.655 13:41:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:03.655 13:41:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.655 13:41:42 -- common/autotest_common.sh@10 -- # set +x 00:18:03.655 ************************************ 00:18:03.655 START TEST raid_superblock_test 00:18:03.655 ************************************ 00:18:03.655 13:41:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:03.655 13:41:42 -- 
bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=121025 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:03.655 13:41:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121025 /var/tmp/spdk-raid.sock 00:18:03.655 13:41:42 -- common/autotest_common.sh@819 -- # '[' -z 121025 ']' 00:18:03.655 13:41:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.655 13:41:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.655 13:41:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:03.655 13:41:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.655 13:41:42 -- common/autotest_common.sh@10 -- # set +x 00:18:03.655 [2024-07-10 13:41:42.998183] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
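Before the first malloc bdev is created below, the harness brings up a bare bdev_svc application on a private RPC socket; a condensed sketch of that launch sequence, assuming the paths shown in the trace above and using a simple poll loop as a stand-in for the suite's waitforlisten helper (rpc_get_methods is assumed to be the cheapest RPC to probe with):

  # start the app with the bdev_raid debug log flag seen throughout this run
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll the UNIX socket until the app answers; only then is it safe to issue bdev RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Every rpc.py invocation in the remainder of this test targets that same -s /var/tmp/spdk-raid.sock socket, which is why the killprocess of the pid recorded here is what tears the whole fixture down at the end.
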
00:18:03.655 [2024-07-10 13:41:42.998404] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121025 ] 00:18:03.914 [2024-07-10 13:41:43.140399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.173 [2024-07-10 13:41:43.400786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.433 [2024-07-10 13:41:43.640656] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.692 13:41:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.692 13:41:43 -- common/autotest_common.sh@852 -- # return 0 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.692 13:41:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:04.951 malloc1 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.951 [2024-07-10 13:41:44.272299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.951 [2024-07-10 13:41:44.272464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.951 [2024-07-10 13:41:44.272521] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:04.951 [2024-07-10 13:41:44.272579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.951 [2024-07-10 13:41:44.274507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.951 [2024-07-10 13:41:44.274589] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.951 pt1 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:04.951 13:41:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:04.952 13:41:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:05.210 malloc2 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:05.469 [2024-07-10 13:41:44.750088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.469 [2024-07-10 13:41:44.750252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.469 [2024-07-10 13:41:44.750307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:05.469 [2024-07-10 13:41:44.750376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.469 [2024-07-10 13:41:44.752543] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.469 [2024-07-10 13:41:44.752626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.469 pt2 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.469 13:41:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:05.727 malloc3 00:18:05.727 13:41:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:05.987 [2024-07-10 13:41:45.247022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:05.987 [2024-07-10 13:41:45.247145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.987 [2024-07-10 13:41:45.247212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:05.987 [2024-07-10 13:41:45.247275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.987 [2024-07-10 13:41:45.249406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.987 [2024-07-10 13:41:45.249487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:05.987 pt3 00:18:05.987 13:41:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:05.987 13:41:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:05.987 13:41:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:06.247 [2024-07-10 13:41:45.442788] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.247 [2024-07-10 13:41:45.444580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.247 [2024-07-10 13:41:45.444695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:06.247 [2024-07-10 13:41:45.444876] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:06.247 [2024-07-10 13:41:45.444949] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:06.247 [2024-07-10 13:41:45.445103] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:06.247 [2024-07-10 13:41:45.445447] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:06.247 [2024-07-10 13:41:45.445490] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:06.247 [2024-07-10 13:41:45.445668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.247 13:41:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.506 13:41:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.506 "name": "raid_bdev1", 00:18:06.506 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:06.506 "strip_size_kb": 0, 00:18:06.506 "state": "online", 00:18:06.506 "raid_level": "raid1", 00:18:06.506 "superblock": true, 00:18:06.506 "num_base_bdevs": 3, 00:18:06.506 "num_base_bdevs_discovered": 3, 00:18:06.506 "num_base_bdevs_operational": 3, 00:18:06.506 "base_bdevs_list": [ 00:18:06.506 { 00:18:06.506 "name": "pt1", 00:18:06.506 "uuid": "62112e1e-1730-535c-950b-38a112853370", 00:18:06.506 "is_configured": true, 00:18:06.506 "data_offset": 2048, 00:18:06.506 "data_size": 63488 00:18:06.506 }, 00:18:06.506 { 00:18:06.506 "name": "pt2", 00:18:06.506 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:06.506 "is_configured": true, 00:18:06.506 "data_offset": 2048, 00:18:06.506 "data_size": 63488 00:18:06.506 }, 00:18:06.506 { 00:18:06.506 "name": "pt3", 00:18:06.506 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:06.506 "is_configured": true, 00:18:06.506 "data_offset": 2048, 00:18:06.506 "data_size": 63488 00:18:06.506 } 00:18:06.506 ] 00:18:06.506 }' 00:18:06.506 13:41:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.506 13:41:45 -- common/autotest_common.sh@10 -- # set +x 00:18:07.075 13:41:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.075 13:41:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:07.334 [2024-07-10 13:41:46.445123] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.334 13:41:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 00:18:07.334 13:41:46 -- bdev/bdev_raid.sh@380 -- # '[' -z e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 ']' 00:18:07.334 13:41:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:07.334 [2024-07-10 13:41:46.628596] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.334 [2024-07-10 13:41:46.628676] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.334 [2024-07-10 13:41:46.628779] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.334 [2024-07-10 13:41:46.628864] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.334 [2024-07-10 13:41:46.628901] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:07.334 13:41:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.334 13:41:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:07.593 13:41:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:07.593 13:41:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:07.593 13:41:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.593 13:41:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:07.851 13:41:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.851 13:41:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:08.113 13:41:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.113 13:41:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:08.113 13:41:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:08.113 13:41:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:08.408 13:41:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:08.408 13:41:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:08.408 13:41:47 -- common/autotest_common.sh@640 -- # local es=0 00:18:08.408 13:41:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:08.408 13:41:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.408 13:41:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.408 13:41:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.408 13:41:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.408 13:41:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.408 13:41:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.408 13:41:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.408 13:41:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:08.408 13:41:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:08.678 [2024-07-10 13:41:47.834440] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:08.678 [2024-07-10 13:41:47.836158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:08.678 [2024-07-10 13:41:47.836276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:08.678 [2024-07-10 13:41:47.836348] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:08.678 [2024-07-10 13:41:47.836436] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:08.678 [2024-07-10 13:41:47.836487] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:08.678 [2024-07-10 13:41:47.836538] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.678 [2024-07-10 13:41:47.836561] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:18:08.678 request: 00:18:08.678 { 00:18:08.678 "name": "raid_bdev1", 00:18:08.678 "raid_level": "raid1", 00:18:08.678 "base_bdevs": [ 00:18:08.678 "malloc1", 00:18:08.678 "malloc2", 00:18:08.678 "malloc3" 00:18:08.678 ], 00:18:08.678 "superblock": false, 00:18:08.678 "method": "bdev_raid_create", 00:18:08.678 "req_id": 1 00:18:08.678 } 00:18:08.678 Got JSON-RPC error response 00:18:08.678 response: 00:18:08.678 { 00:18:08.678 "code": -17, 00:18:08.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:08.678 } 00:18:08.678 13:41:47 -- common/autotest_common.sh@643 -- # es=1 00:18:08.678 13:41:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:08.678 13:41:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:08.678 13:41:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:08.678 13:41:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.678 13:41:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.937 [2024-07-10 13:41:48.245660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.937 [2024-07-10 13:41:48.245812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.937 [2024-07-10 13:41:48.245863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:08.937 [2024-07-10 13:41:48.245899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.937 [2024-07-10 13:41:48.248030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.937 [2024-07-10 13:41:48.248138] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.937 [2024-07-10 13:41:48.248295] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:08.937 [2024-07-10 13:41:48.248394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.937 pt1 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:08.937 
13:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.937 13:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.193 13:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.193 "name": "raid_bdev1", 00:18:09.193 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:09.193 "strip_size_kb": 0, 00:18:09.193 "state": "configuring", 00:18:09.193 "raid_level": "raid1", 00:18:09.193 "superblock": true, 00:18:09.193 "num_base_bdevs": 3, 00:18:09.193 "num_base_bdevs_discovered": 1, 00:18:09.193 "num_base_bdevs_operational": 3, 00:18:09.193 "base_bdevs_list": [ 00:18:09.193 { 00:18:09.193 "name": "pt1", 00:18:09.193 "uuid": "62112e1e-1730-535c-950b-38a112853370", 00:18:09.193 "is_configured": true, 00:18:09.193 "data_offset": 2048, 00:18:09.193 "data_size": 63488 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "name": null, 00:18:09.193 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:09.193 "is_configured": false, 00:18:09.193 "data_offset": 2048, 00:18:09.193 "data_size": 63488 00:18:09.193 }, 00:18:09.193 { 00:18:09.193 "name": null, 00:18:09.193 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:09.193 "is_configured": false, 00:18:09.193 "data_offset": 2048, 00:18:09.193 "data_size": 63488 00:18:09.193 } 00:18:09.193 ] 00:18:09.193 }' 00:18:09.193 13:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.193 13:41:48 -- common/autotest_common.sh@10 -- # set +x 00:18:10.125 13:41:49 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:10.125 13:41:49 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.125 [2024-07-10 13:41:49.295883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.125 [2024-07-10 13:41:49.296064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.125 [2024-07-10 13:41:49.296137] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:10.125 [2024-07-10 13:41:49.296208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.125 [2024-07-10 13:41:49.296740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.125 [2024-07-10 13:41:49.296808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.125 [2024-07-10 13:41:49.296969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:10.125 [2024-07-10 13:41:49.297024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.125 pt2 00:18:10.125 13:41:49 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:10.390 [2024-07-10 13:41:49.499580] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.390 13:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.649 13:41:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.649 "name": "raid_bdev1", 00:18:10.649 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:10.649 "strip_size_kb": 0, 00:18:10.649 "state": "configuring", 00:18:10.649 "raid_level": "raid1", 00:18:10.649 "superblock": true, 00:18:10.649 "num_base_bdevs": 3, 00:18:10.649 "num_base_bdevs_discovered": 1, 00:18:10.649 "num_base_bdevs_operational": 3, 00:18:10.649 "base_bdevs_list": [ 00:18:10.649 { 00:18:10.649 "name": "pt1", 00:18:10.649 "uuid": "62112e1e-1730-535c-950b-38a112853370", 00:18:10.649 "is_configured": true, 00:18:10.649 "data_offset": 2048, 00:18:10.649 "data_size": 63488 00:18:10.649 }, 00:18:10.649 { 00:18:10.649 "name": null, 00:18:10.649 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:10.649 "is_configured": false, 00:18:10.649 "data_offset": 2048, 00:18:10.649 "data_size": 63488 00:18:10.649 }, 00:18:10.649 { 00:18:10.649 "name": null, 00:18:10.649 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:10.649 "is_configured": false, 00:18:10.649 "data_offset": 2048, 00:18:10.649 "data_size": 63488 00:18:10.649 } 00:18:10.649 ] 00:18:10.649 }' 00:18:10.649 13:41:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.649 13:41:49 -- common/autotest_common.sh@10 -- # set +x 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.216 [2024-07-10 13:41:50.549667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.216 [2024-07-10 13:41:50.549838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.216 [2024-07-10 13:41:50.549899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:11.216 [2024-07-10 13:41:50.549943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.216 [2024-07-10 13:41:50.550426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.216 [2024-07-10 13:41:50.550497] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.216 [2024-07-10 13:41:50.550651] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:11.216 [2024-07-10 13:41:50.550706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.216 pt2 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:11.216 13:41:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:11.474 [2024-07-10 13:41:50.721382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:11.474 [2024-07-10 13:41:50.721524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.475 [2024-07-10 13:41:50.721572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:11.475 [2024-07-10 13:41:50.721617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.475 [2024-07-10 13:41:50.722109] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.475 [2024-07-10 13:41:50.722179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:11.475 [2024-07-10 13:41:50.722331] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:11.475 [2024-07-10 13:41:50.722383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:11.475 [2024-07-10 13:41:50.722537] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:11.475 [2024-07-10 13:41:50.722574] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:11.475 [2024-07-10 13:41:50.722716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:11.475 [2024-07-10 13:41:50.723064] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:11.475 [2024-07-10 13:41:50.723110] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:11.475 [2024-07-10 13:41:50.723284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.475 pt3 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.475 13:41:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.475 13:41:50 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.733 13:41:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.733 "name": "raid_bdev1", 00:18:11.733 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:11.733 "strip_size_kb": 0, 00:18:11.733 "state": "online", 00:18:11.733 "raid_level": "raid1", 00:18:11.733 "superblock": true, 00:18:11.733 "num_base_bdevs": 3, 00:18:11.733 "num_base_bdevs_discovered": 3, 00:18:11.733 "num_base_bdevs_operational": 3, 00:18:11.733 "base_bdevs_list": [ 00:18:11.733 { 00:18:11.733 "name": "pt1", 00:18:11.733 "uuid": "62112e1e-1730-535c-950b-38a112853370", 00:18:11.733 "is_configured": true, 00:18:11.733 "data_offset": 2048, 00:18:11.733 "data_size": 63488 00:18:11.733 }, 00:18:11.733 { 00:18:11.733 "name": "pt2", 00:18:11.734 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:11.734 "is_configured": true, 00:18:11.734 "data_offset": 2048, 00:18:11.734 "data_size": 63488 00:18:11.734 }, 00:18:11.734 { 00:18:11.734 "name": "pt3", 00:18:11.734 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:11.734 "is_configured": true, 00:18:11.734 "data_offset": 2048, 00:18:11.734 "data_size": 63488 00:18:11.734 } 00:18:11.734 ] 00:18:11.734 }' 00:18:11.734 13:41:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.734 13:41:50 -- common/autotest_common.sh@10 -- # set +x 00:18:12.301 13:41:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.301 13:41:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:12.559 [2024-07-10 13:41:51.723796] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.559 13:41:51 -- bdev/bdev_raid.sh@430 -- # '[' e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 '!=' e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 ']' 00:18:12.559 13:41:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:12.559 13:41:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:12.559 13:41:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:12.559 13:41:51 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:12.559 [2024-07-10 13:41:51.911324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.818 13:41:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.818 13:41:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.818 "name": "raid_bdev1", 00:18:12.818 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:12.818 "strip_size_kb": 0, 00:18:12.818 "state": "online", 
00:18:12.818 "raid_level": "raid1", 00:18:12.818 "superblock": true, 00:18:12.818 "num_base_bdevs": 3, 00:18:12.818 "num_base_bdevs_discovered": 2, 00:18:12.818 "num_base_bdevs_operational": 2, 00:18:12.818 "base_bdevs_list": [ 00:18:12.818 { 00:18:12.818 "name": null, 00:18:12.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.818 "is_configured": false, 00:18:12.818 "data_offset": 2048, 00:18:12.818 "data_size": 63488 00:18:12.818 }, 00:18:12.818 { 00:18:12.818 "name": "pt2", 00:18:12.818 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:12.818 "is_configured": true, 00:18:12.818 "data_offset": 2048, 00:18:12.818 "data_size": 63488 00:18:12.818 }, 00:18:12.818 { 00:18:12.818 "name": "pt3", 00:18:12.818 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:12.818 "is_configured": true, 00:18:12.818 "data_offset": 2048, 00:18:12.818 "data_size": 63488 00:18:12.818 } 00:18:12.818 ] 00:18:12.818 }' 00:18:12.818 13:41:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.818 13:41:52 -- common/autotest_common.sh@10 -- # set +x 00:18:13.754 13:41:52 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:13.754 [2024-07-10 13:41:52.921484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.754 [2024-07-10 13:41:52.921572] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.754 [2024-07-10 13:41:52.921652] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.754 [2024-07-10 13:41:52.921719] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.754 [2024-07-10 13:41:52.921737] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:13.754 13:41:52 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.754 13:41:52 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:14.015 13:41:53 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:14.274 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:14.274 13:41:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:14.274 13:41:53 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:14.274 13:41:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:14.274 13:41:53 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.534 [2024-07-10 13:41:53.648188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.534 [2024-07-10 13:41:53.648317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.534 [2024-07-10 
13:41:53.648360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:14.534 [2024-07-10 13:41:53.648394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.534 [2024-07-10 13:41:53.650369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.534 [2024-07-10 13:41:53.650445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.534 [2024-07-10 13:41:53.650583] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:14.534 [2024-07-10 13:41:53.650686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.534 pt2 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.534 "name": "raid_bdev1", 00:18:14.534 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:14.534 "strip_size_kb": 0, 00:18:14.534 "state": "configuring", 00:18:14.534 "raid_level": "raid1", 00:18:14.534 "superblock": true, 00:18:14.534 "num_base_bdevs": 3, 00:18:14.534 "num_base_bdevs_discovered": 1, 00:18:14.534 "num_base_bdevs_operational": 2, 00:18:14.534 "base_bdevs_list": [ 00:18:14.534 { 00:18:14.534 "name": null, 00:18:14.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.534 "is_configured": false, 00:18:14.534 "data_offset": 2048, 00:18:14.534 "data_size": 63488 00:18:14.534 }, 00:18:14.534 { 00:18:14.534 "name": "pt2", 00:18:14.534 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:14.534 "is_configured": true, 00:18:14.534 "data_offset": 2048, 00:18:14.534 "data_size": 63488 00:18:14.534 }, 00:18:14.534 { 00:18:14.534 "name": null, 00:18:14.534 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:14.534 "is_configured": false, 00:18:14.534 "data_offset": 2048, 00:18:14.534 "data_size": 63488 00:18:14.534 } 00:18:14.534 ] 00:18:14.534 }' 00:18:14.534 13:41:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.534 13:41:53 -- common/autotest_common.sh@10 -- # set +x 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:15.487 [2024-07-10 13:41:54.678415] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:15.487 [2024-07-10 13:41:54.678593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.487 [2024-07-10 13:41:54.678646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:15.487 [2024-07-10 13:41:54.678706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.487 [2024-07-10 13:41:54.679175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.487 [2024-07-10 13:41:54.679233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:15.487 [2024-07-10 13:41:54.679384] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:15.487 [2024-07-10 13:41:54.679431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:15.487 [2024-07-10 13:41:54.679551] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:18:15.487 [2024-07-10 13:41:54.679582] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:15.487 [2024-07-10 13:41:54.679699] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:15.487 [2024-07-10 13:41:54.680013] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:18:15.487 [2024-07-10 13:41:54.680055] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:18:15.487 [2024-07-10 13:41:54.680239] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.487 pt3 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.487 13:41:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.747 13:41:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.747 "name": "raid_bdev1", 00:18:15.747 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:15.747 "strip_size_kb": 0, 00:18:15.747 "state": "online", 00:18:15.747 "raid_level": "raid1", 00:18:15.747 "superblock": true, 00:18:15.747 "num_base_bdevs": 3, 00:18:15.747 "num_base_bdevs_discovered": 2, 00:18:15.747 "num_base_bdevs_operational": 2, 00:18:15.747 "base_bdevs_list": [ 00:18:15.747 { 00:18:15.747 "name": null, 00:18:15.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.747 "is_configured": false, 00:18:15.747 "data_offset": 2048, 00:18:15.747 "data_size": 63488 00:18:15.747 }, 00:18:15.747 { 00:18:15.747 "name": "pt2", 00:18:15.747 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:15.747 
"is_configured": true, 00:18:15.747 "data_offset": 2048, 00:18:15.747 "data_size": 63488 00:18:15.747 }, 00:18:15.747 { 00:18:15.747 "name": "pt3", 00:18:15.747 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:15.747 "is_configured": true, 00:18:15.747 "data_offset": 2048, 00:18:15.747 "data_size": 63488 00:18:15.747 } 00:18:15.747 ] 00:18:15.747 }' 00:18:15.747 13:41:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.747 13:41:54 -- common/autotest_common.sh@10 -- # set +x 00:18:16.316 13:41:55 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:16.316 13:41:55 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:16.316 [2024-07-10 13:41:55.652582] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.316 [2024-07-10 13:41:55.652654] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.316 [2024-07-10 13:41:55.652747] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.317 [2024-07-10 13:41:55.652829] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.317 [2024-07-10 13:41:55.652858] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:18:16.576 13:41:55 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.576 13:41:55 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:16.576 13:41:55 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:16.576 13:41:55 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:16.576 13:41:55 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:16.835 [2024-07-10 13:41:56.004198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:16.835 [2024-07-10 13:41:56.004357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.835 [2024-07-10 13:41:56.004408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:16.835 [2024-07-10 13:41:56.004441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.835 [2024-07-10 13:41:56.006371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.835 [2024-07-10 13:41:56.006450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:16.835 [2024-07-10 13:41:56.006592] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:16.835 [2024-07-10 13:41:56.006703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:16.835 pt1 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.835 13:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.095 13:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.095 "name": "raid_bdev1", 00:18:17.095 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:17.095 "strip_size_kb": 0, 00:18:17.095 "state": "configuring", 00:18:17.095 "raid_level": "raid1", 00:18:17.095 "superblock": true, 00:18:17.095 "num_base_bdevs": 3, 00:18:17.095 "num_base_bdevs_discovered": 1, 00:18:17.095 "num_base_bdevs_operational": 3, 00:18:17.095 "base_bdevs_list": [ 00:18:17.095 { 00:18:17.095 "name": "pt1", 00:18:17.095 "uuid": "62112e1e-1730-535c-950b-38a112853370", 00:18:17.095 "is_configured": true, 00:18:17.095 "data_offset": 2048, 00:18:17.095 "data_size": 63488 00:18:17.095 }, 00:18:17.095 { 00:18:17.095 "name": null, 00:18:17.095 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:17.095 "is_configured": false, 00:18:17.095 "data_offset": 2048, 00:18:17.095 "data_size": 63488 00:18:17.095 }, 00:18:17.095 { 00:18:17.095 "name": null, 00:18:17.095 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:17.095 "is_configured": false, 00:18:17.095 "data_offset": 2048, 00:18:17.095 "data_size": 63488 00:18:17.095 } 00:18:17.095 ] 00:18:17.095 }' 00:18:17.095 13:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.095 13:41:56 -- common/autotest_common.sh@10 -- # set +x 00:18:17.664 13:41:56 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:17.664 13:41:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:17.664 13:41:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:17.923 13:41:57 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.184 [2024-07-10 13:41:57.396223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.184 [2024-07-10 13:41:57.396396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.184 [2024-07-10 13:41:57.396450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:18.184 [2024-07-10 13:41:57.396525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.184 [2024-07-10 13:41:57.397072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.184 [2024-07-10 13:41:57.397158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.184 [2024-07-10 13:41:57.397340] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:18.184 
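What the trace above is exercising: after raid_bdev1 is torn down, each passthru bdev is re-registered on top of its malloc backing device, and bdev_raid's examine callback reads the on-disk superblock back in ("raid superblock found on bdev pt3"). A condensed sketch of one such re-register step, assuming a bdev_svc app already listening on /var/tmp/spdk-raid.sock and a malloc3 bdev that already carries a raid superblock:

    # Re-create the passthru bdev as bdev_raid.sh@490 does above; the UUID is
    # only the passthru bdev's identity -- re-assembly is keyed off the raid
    # superblock that examine finds on the underlying device.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

As the next records show, examine then compares superblock sequence numbers, and a newer superblock on pt3 causes the stale raid_bdev1 to be deleted and re-created around it.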
[2024-07-10 13:41:57.397379] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:18.184 [2024-07-10 13:41:57.397409] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.184 [2024-07-10 13:41:57.397460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:18:18.184 [2024-07-10 13:41:57.397576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.184 pt3 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.184 13:41:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.443 13:41:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.443 "name": "raid_bdev1", 00:18:18.443 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:18.443 "strip_size_kb": 0, 00:18:18.443 "state": "configuring", 00:18:18.443 "raid_level": "raid1", 00:18:18.443 "superblock": true, 00:18:18.443 "num_base_bdevs": 3, 00:18:18.443 "num_base_bdevs_discovered": 1, 00:18:18.443 "num_base_bdevs_operational": 2, 00:18:18.443 "base_bdevs_list": [ 00:18:18.443 { 00:18:18.443 "name": null, 00:18:18.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.443 "is_configured": false, 00:18:18.443 "data_offset": 2048, 00:18:18.443 "data_size": 63488 00:18:18.443 }, 00:18:18.443 { 00:18:18.443 "name": null, 00:18:18.443 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:18.443 "is_configured": false, 00:18:18.443 "data_offset": 2048, 00:18:18.443 "data_size": 63488 00:18:18.443 }, 00:18:18.443 { 00:18:18.443 "name": "pt3", 00:18:18.443 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:18.443 "is_configured": true, 00:18:18.443 "data_offset": 2048, 00:18:18.443 "data_size": 63488 00:18:18.443 } 00:18:18.443 ] 00:18:18.443 }' 00:18:18.443 13:41:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.443 13:41:57 -- common/autotest_common.sh@10 -- # set +x 00:18:19.013 13:41:58 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:19.013 13:41:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:19.013 13:41:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.273 [2024-07-10 13:41:58.439342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.273 [2024-07-10 13:41:58.439515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.273 [2024-07-10 13:41:58.439561] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:19.273 [2024-07-10 13:41:58.439622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.273 [2024-07-10 13:41:58.440172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.273 [2024-07-10 13:41:58.440247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.273 [2024-07-10 13:41:58.440395] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:19.273 [2024-07-10 13:41:58.440443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.273 [2024-07-10 13:41:58.440585] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:18:19.273 [2024-07-10 13:41:58.440620] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:19.273 [2024-07-10 13:41:58.440763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:19.273 [2024-07-10 13:41:58.441102] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:18:19.273 [2024-07-10 13:41:58.441146] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:18:19.273 [2024-07-10 13:41:58.441303] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.273 pt2 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.273 13:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.533 13:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.533 "name": "raid_bdev1", 00:18:19.533 "uuid": "e9b3c3b2-a0f8-440b-a2bd-77be707d87f9", 00:18:19.533 "strip_size_kb": 0, 00:18:19.533 "state": "online", 00:18:19.533 "raid_level": "raid1", 00:18:19.533 "superblock": true, 00:18:19.533 "num_base_bdevs": 3, 00:18:19.533 "num_base_bdevs_discovered": 2, 00:18:19.533 "num_base_bdevs_operational": 2, 00:18:19.533 "base_bdevs_list": [ 00:18:19.533 { 00:18:19.533 "name": null, 00:18:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.533 "is_configured": false, 00:18:19.533 "data_offset": 2048, 00:18:19.533 "data_size": 63488 00:18:19.533 }, 00:18:19.533 { 00:18:19.533 "name": "pt2", 00:18:19.533 "uuid": "ab99d494-8787-5fd7-908e-4f3fe91992d7", 00:18:19.533 "is_configured": true, 00:18:19.533 "data_offset": 2048, 00:18:19.533 "data_size": 63488 00:18:19.533 
}, 00:18:19.533 { 00:18:19.533 "name": "pt3", 00:18:19.533 "uuid": "5b99d361-dd95-5cb1-95fe-301ceb0febae", 00:18:19.533 "is_configured": true, 00:18:19.533 "data_offset": 2048, 00:18:19.533 "data_size": 63488 00:18:19.533 } 00:18:19.533 ] 00:18:19.533 }' 00:18:19.533 13:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.533 13:41:58 -- common/autotest_common.sh@10 -- # set +x 00:18:20.112 13:41:59 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:20.112 13:41:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:20.112 [2024-07-10 13:41:59.441911] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.112 13:41:59 -- bdev/bdev_raid.sh@506 -- # '[' e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 '!=' e9b3c3b2-a0f8-440b-a2bd-77be707d87f9 ']' 00:18:20.112 13:41:59 -- bdev/bdev_raid.sh@511 -- # killprocess 121025 00:18:20.112 13:41:59 -- common/autotest_common.sh@926 -- # '[' -z 121025 ']' 00:18:20.112 13:41:59 -- common/autotest_common.sh@930 -- # kill -0 121025 00:18:20.112 13:41:59 -- common/autotest_common.sh@931 -- # uname 00:18:20.372 13:41:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.372 13:41:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121025 00:18:20.372 killing process with pid 121025 00:18:20.372 13:41:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.372 13:41:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.372 13:41:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121025' 00:18:20.372 13:41:59 -- common/autotest_common.sh@945 -- # kill 121025 00:18:20.372 13:41:59 -- common/autotest_common.sh@950 -- # wait 121025 00:18:20.372 [2024-07-10 13:41:59.483753] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.372 [2024-07-10 13:41:59.483849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.372 [2024-07-10 13:41:59.483960] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.372 [2024-07-10 13:41:59.483996] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:18:20.674 [2024-07-10 13:41:59.780434] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.077 ************************************ 00:18:22.077 END TEST raid_superblock_test 00:18:22.077 ************************************ 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:22.077 00:18:22.077 real 0m18.112s 00:18:22.077 user 0m32.898s 00:18:22.077 sys 0m2.153s 00:18:22.077 13:42:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.077 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:22.077 13:42:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:22.077 13:42:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.077 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 ************************************ 00:18:22.077 START TEST raid_state_function_test 00:18:22.077 ************************************ 00:18:22.077 13:42:01 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=121644 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121644' 00:18:22.077 Process raid pid: 121644 00:18:22.077 13:42:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121644 /var/tmp/spdk-raid.sock 00:18:22.077 13:42:01 -- common/autotest_common.sh@819 -- # '[' -z 121644 ']' 00:18:22.077 13:42:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.077 13:42:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:22.077 13:42:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.077 13:42:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:22.077 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 [2024-07-10 13:42:01.182552] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
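Every raid_state_function_test iteration uses the same bring-up seen in this trace: launch a bare bdev_svc application with bdev_raid debug logging enabled, record its pid, and block until the RPC socket accepts connections before issuing any bdev RPCs. Roughly, with the paths used in this job:

    # Bring-up as traced at bdev_raid.sh@225-228; waitforlisten is the
    # autotest_common.sh helper that polls the pid and the UNIX-domain socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock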
00:18:22.077 [2024-07-10 13:42:01.182759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.077 [2024-07-10 13:42:01.342419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.336 [2024-07-10 13:42:01.532823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.594 [2024-07-10 13:42:01.727061] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.853 13:42:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.853 13:42:01 -- common/autotest_common.sh@852 -- # return 0 00:18:22.853 13:42:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:22.853 [2024-07-10 13:42:02.142400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.853 [2024-07-10 13:42:02.142542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.853 [2024-07-10 13:42:02.142582] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.853 [2024-07-10 13:42:02.142615] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.853 [2024-07-10 13:42:02.142633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.853 [2024-07-10 13:42:02.142681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.853 [2024-07-10 13:42:02.142729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:22.853 [2024-07-10 13:42:02.142767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:22.853 13:42:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:22.853 13:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.853 13:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.854 13:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.113 13:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.113 "name": "Existed_Raid", 00:18:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.113 "strip_size_kb": 64, 00:18:23.113 "state": "configuring", 00:18:23.113 "raid_level": "raid0", 00:18:23.113 "superblock": false, 00:18:23.113 "num_base_bdevs": 4, 00:18:23.113 "num_base_bdevs_discovered": 0, 00:18:23.113 "num_base_bdevs_operational": 4, 00:18:23.113 "base_bdevs_list": [ 00:18:23.113 { 00:18:23.113 
"name": "BaseBdev1", 00:18:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.113 "is_configured": false, 00:18:23.113 "data_offset": 0, 00:18:23.113 "data_size": 0 00:18:23.113 }, 00:18:23.113 { 00:18:23.113 "name": "BaseBdev2", 00:18:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.113 "is_configured": false, 00:18:23.113 "data_offset": 0, 00:18:23.113 "data_size": 0 00:18:23.113 }, 00:18:23.113 { 00:18:23.113 "name": "BaseBdev3", 00:18:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.113 "is_configured": false, 00:18:23.113 "data_offset": 0, 00:18:23.113 "data_size": 0 00:18:23.113 }, 00:18:23.113 { 00:18:23.113 "name": "BaseBdev4", 00:18:23.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.113 "is_configured": false, 00:18:23.113 "data_offset": 0, 00:18:23.113 "data_size": 0 00:18:23.113 } 00:18:23.113 ] 00:18:23.113 }' 00:18:23.113 13:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.113 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:18:23.681 13:42:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.939 [2024-07-10 13:42:03.052748] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.939 [2024-07-10 13:42:03.052835] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:23.939 13:42:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:23.939 [2024-07-10 13:42:03.228488] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.939 [2024-07-10 13:42:03.228602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.939 [2024-07-10 13:42:03.228637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.939 [2024-07-10 13:42:03.228693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.939 [2024-07-10 13:42:03.228720] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.939 [2024-07-10 13:42:03.228770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.939 [2024-07-10 13:42:03.228810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.939 [2024-07-10 13:42:03.228867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.939 13:42:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.198 [2024-07-10 13:42:03.439043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.198 BaseBdev1 00:18:24.198 13:42:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:24.198 13:42:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:24.198 13:42:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:24.198 13:42:03 -- common/autotest_common.sh@889 -- # local i 00:18:24.198 13:42:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:24.198 13:42:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:24.198 13:42:03 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.456 13:42:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.456 [ 00:18:24.456 { 00:18:24.456 "name": "BaseBdev1", 00:18:24.456 "aliases": [ 00:18:24.456 "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf" 00:18:24.456 ], 00:18:24.456 "product_name": "Malloc disk", 00:18:24.456 "block_size": 512, 00:18:24.456 "num_blocks": 65536, 00:18:24.456 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:24.456 "assigned_rate_limits": { 00:18:24.456 "rw_ios_per_sec": 0, 00:18:24.456 "rw_mbytes_per_sec": 0, 00:18:24.456 "r_mbytes_per_sec": 0, 00:18:24.456 "w_mbytes_per_sec": 0 00:18:24.456 }, 00:18:24.456 "claimed": true, 00:18:24.456 "claim_type": "exclusive_write", 00:18:24.456 "zoned": false, 00:18:24.456 "supported_io_types": { 00:18:24.456 "read": true, 00:18:24.456 "write": true, 00:18:24.456 "unmap": true, 00:18:24.456 "write_zeroes": true, 00:18:24.456 "flush": true, 00:18:24.456 "reset": true, 00:18:24.456 "compare": false, 00:18:24.457 "compare_and_write": false, 00:18:24.457 "abort": true, 00:18:24.457 "nvme_admin": false, 00:18:24.457 "nvme_io": false 00:18:24.457 }, 00:18:24.457 "memory_domains": [ 00:18:24.457 { 00:18:24.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.457 "dma_device_type": 2 00:18:24.457 } 00:18:24.457 ], 00:18:24.457 "driver_specific": {} 00:18:24.457 } 00:18:24.457 ] 00:18:24.457 13:42:03 -- common/autotest_common.sh@895 -- # return 0 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.457 13:42:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.716 13:42:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.716 "name": "Existed_Raid", 00:18:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.716 "strip_size_kb": 64, 00:18:24.716 "state": "configuring", 00:18:24.716 "raid_level": "raid0", 00:18:24.716 "superblock": false, 00:18:24.716 "num_base_bdevs": 4, 00:18:24.716 "num_base_bdevs_discovered": 1, 00:18:24.716 "num_base_bdevs_operational": 4, 00:18:24.716 "base_bdevs_list": [ 00:18:24.716 { 00:18:24.716 "name": "BaseBdev1", 00:18:24.716 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:24.716 "is_configured": true, 00:18:24.716 "data_offset": 0, 00:18:24.716 "data_size": 65536 00:18:24.716 }, 00:18:24.716 { 00:18:24.716 "name": "BaseBdev2", 00:18:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.716 "is_configured": false, 00:18:24.716 "data_offset": 0, 00:18:24.716 "data_size": 0 00:18:24.716 }, 
00:18:24.716 { 00:18:24.716 "name": "BaseBdev3", 00:18:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.716 "is_configured": false, 00:18:24.716 "data_offset": 0, 00:18:24.716 "data_size": 0 00:18:24.716 }, 00:18:24.716 { 00:18:24.716 "name": "BaseBdev4", 00:18:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.716 "is_configured": false, 00:18:24.716 "data_offset": 0, 00:18:24.716 "data_size": 0 00:18:24.716 } 00:18:24.716 ] 00:18:24.716 }' 00:18:24.716 13:42:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.716 13:42:03 -- common/autotest_common.sh@10 -- # set +x 00:18:25.284 13:42:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.544 [2024-07-10 13:42:04.696932] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.544 [2024-07-10 13:42:04.697039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:25.544 [2024-07-10 13:42:04.876676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.544 [2024-07-10 13:42:04.878401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.544 [2024-07-10 13:42:04.878510] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.544 [2024-07-10 13:42:04.878551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.544 [2024-07-10 13:42:04.878603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.544 [2024-07-10 13:42:04.878639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:25.544 [2024-07-10 13:42:04.878672] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.544 13:42:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.803 13:42:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.803 "name": "Existed_Raid", 00:18:25.803 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.803 "strip_size_kb": 64, 00:18:25.803 "state": "configuring", 00:18:25.803 "raid_level": "raid0", 00:18:25.803 "superblock": false, 00:18:25.803 "num_base_bdevs": 4, 00:18:25.803 "num_base_bdevs_discovered": 1, 00:18:25.803 "num_base_bdevs_operational": 4, 00:18:25.803 "base_bdevs_list": [ 00:18:25.803 { 00:18:25.803 "name": "BaseBdev1", 00:18:25.803 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:25.803 "is_configured": true, 00:18:25.803 "data_offset": 0, 00:18:25.803 "data_size": 65536 00:18:25.803 }, 00:18:25.803 { 00:18:25.803 "name": "BaseBdev2", 00:18:25.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.803 "is_configured": false, 00:18:25.803 "data_offset": 0, 00:18:25.803 "data_size": 0 00:18:25.803 }, 00:18:25.803 { 00:18:25.803 "name": "BaseBdev3", 00:18:25.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.803 "is_configured": false, 00:18:25.803 "data_offset": 0, 00:18:25.803 "data_size": 0 00:18:25.803 }, 00:18:25.803 { 00:18:25.803 "name": "BaseBdev4", 00:18:25.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.803 "is_configured": false, 00:18:25.803 "data_offset": 0, 00:18:25.803 "data_size": 0 00:18:25.803 } 00:18:25.803 ] 00:18:25.803 }' 00:18:25.803 13:42:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.803 13:42:05 -- common/autotest_common.sh@10 -- # set +x 00:18:26.371 13:42:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:26.653 [2024-07-10 13:42:05.901123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.653 BaseBdev2 00:18:26.653 13:42:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:26.653 13:42:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:26.653 13:42:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.653 13:42:05 -- common/autotest_common.sh@889 -- # local i 00:18:26.653 13:42:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.653 13:42:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.653 13:42:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.933 13:42:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:27.191 [ 00:18:27.191 { 00:18:27.191 "name": "BaseBdev2", 00:18:27.191 "aliases": [ 00:18:27.191 "5178401c-6fda-4ac2-bf04-a0d696ce59cf" 00:18:27.191 ], 00:18:27.191 "product_name": "Malloc disk", 00:18:27.191 "block_size": 512, 00:18:27.191 "num_blocks": 65536, 00:18:27.191 "uuid": "5178401c-6fda-4ac2-bf04-a0d696ce59cf", 00:18:27.191 "assigned_rate_limits": { 00:18:27.191 "rw_ios_per_sec": 0, 00:18:27.191 "rw_mbytes_per_sec": 0, 00:18:27.191 "r_mbytes_per_sec": 0, 00:18:27.191 "w_mbytes_per_sec": 0 00:18:27.191 }, 00:18:27.191 "claimed": true, 00:18:27.191 "claim_type": "exclusive_write", 00:18:27.191 "zoned": false, 00:18:27.191 "supported_io_types": { 00:18:27.191 "read": true, 00:18:27.191 "write": true, 00:18:27.191 "unmap": true, 00:18:27.191 "write_zeroes": true, 00:18:27.191 "flush": true, 00:18:27.191 "reset": true, 00:18:27.191 "compare": false, 00:18:27.191 "compare_and_write": false, 00:18:27.191 "abort": true, 00:18:27.191 "nvme_admin": false, 00:18:27.191 "nvme_io": false 00:18:27.191 }, 00:18:27.191 "memory_domains": [ 
00:18:27.191 { 00:18:27.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.191 "dma_device_type": 2 00:18:27.191 } 00:18:27.191 ], 00:18:27.191 "driver_specific": {} 00:18:27.191 } 00:18:27.191 ] 00:18:27.191 13:42:06 -- common/autotest_common.sh@895 -- # return 0 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.191 13:42:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.191 "name": "Existed_Raid", 00:18:27.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.191 "strip_size_kb": 64, 00:18:27.191 "state": "configuring", 00:18:27.191 "raid_level": "raid0", 00:18:27.191 "superblock": false, 00:18:27.191 "num_base_bdevs": 4, 00:18:27.191 "num_base_bdevs_discovered": 2, 00:18:27.191 "num_base_bdevs_operational": 4, 00:18:27.191 "base_bdevs_list": [ 00:18:27.191 { 00:18:27.191 "name": "BaseBdev1", 00:18:27.191 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:27.191 "is_configured": true, 00:18:27.191 "data_offset": 0, 00:18:27.191 "data_size": 65536 00:18:27.191 }, 00:18:27.191 { 00:18:27.191 "name": "BaseBdev2", 00:18:27.191 "uuid": "5178401c-6fda-4ac2-bf04-a0d696ce59cf", 00:18:27.191 "is_configured": true, 00:18:27.191 "data_offset": 0, 00:18:27.191 "data_size": 65536 00:18:27.191 }, 00:18:27.192 { 00:18:27.192 "name": "BaseBdev3", 00:18:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.192 "is_configured": false, 00:18:27.192 "data_offset": 0, 00:18:27.192 "data_size": 0 00:18:27.192 }, 00:18:27.192 { 00:18:27.192 "name": "BaseBdev4", 00:18:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.192 "is_configured": false, 00:18:27.192 "data_offset": 0, 00:18:27.192 "data_size": 0 00:18:27.192 } 00:18:27.192 ] 00:18:27.192 }' 00:18:27.192 13:42:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.192 13:42:06 -- common/autotest_common.sh@10 -- # set +x 00:18:27.760 13:42:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.019 [2024-07-10 13:42:07.262767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.019 BaseBdev3 00:18:28.019 13:42:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:28.019 13:42:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:28.019 13:42:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:28.019 
13:42:07 -- common/autotest_common.sh@889 -- # local i 00:18:28.019 13:42:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:28.019 13:42:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:28.019 13:42:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.278 13:42:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.278 [ 00:18:28.278 { 00:18:28.278 "name": "BaseBdev3", 00:18:28.278 "aliases": [ 00:18:28.278 "2af6c852-7cad-4aa7-93ee-337e0da8b2be" 00:18:28.278 ], 00:18:28.278 "product_name": "Malloc disk", 00:18:28.278 "block_size": 512, 00:18:28.278 "num_blocks": 65536, 00:18:28.278 "uuid": "2af6c852-7cad-4aa7-93ee-337e0da8b2be", 00:18:28.278 "assigned_rate_limits": { 00:18:28.278 "rw_ios_per_sec": 0, 00:18:28.278 "rw_mbytes_per_sec": 0, 00:18:28.278 "r_mbytes_per_sec": 0, 00:18:28.278 "w_mbytes_per_sec": 0 00:18:28.278 }, 00:18:28.278 "claimed": true, 00:18:28.278 "claim_type": "exclusive_write", 00:18:28.278 "zoned": false, 00:18:28.278 "supported_io_types": { 00:18:28.278 "read": true, 00:18:28.278 "write": true, 00:18:28.278 "unmap": true, 00:18:28.278 "write_zeroes": true, 00:18:28.278 "flush": true, 00:18:28.278 "reset": true, 00:18:28.278 "compare": false, 00:18:28.278 "compare_and_write": false, 00:18:28.278 "abort": true, 00:18:28.278 "nvme_admin": false, 00:18:28.278 "nvme_io": false 00:18:28.278 }, 00:18:28.278 "memory_domains": [ 00:18:28.278 { 00:18:28.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.278 "dma_device_type": 2 00:18:28.278 } 00:18:28.278 ], 00:18:28.278 "driver_specific": {} 00:18:28.278 } 00:18:28.278 ] 00:18:28.537 13:42:07 -- common/autotest_common.sh@895 -- # return 0 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.537 "name": "Existed_Raid", 00:18:28.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.537 "strip_size_kb": 64, 00:18:28.537 "state": "configuring", 00:18:28.537 "raid_level": "raid0", 00:18:28.537 "superblock": false, 00:18:28.537 "num_base_bdevs": 4, 00:18:28.537 "num_base_bdevs_discovered": 3, 00:18:28.537 "num_base_bdevs_operational": 4, 00:18:28.537 "base_bdevs_list": [ 00:18:28.537 { 00:18:28.537 "name": 
"BaseBdev1", 00:18:28.537 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 65536 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev2", 00:18:28.537 "uuid": "5178401c-6fda-4ac2-bf04-a0d696ce59cf", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 65536 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev3", 00:18:28.537 "uuid": "2af6c852-7cad-4aa7-93ee-337e0da8b2be", 00:18:28.537 "is_configured": true, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 65536 00:18:28.537 }, 00:18:28.537 { 00:18:28.537 "name": "BaseBdev4", 00:18:28.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.537 "is_configured": false, 00:18:28.537 "data_offset": 0, 00:18:28.537 "data_size": 0 00:18:28.537 } 00:18:28.537 ] 00:18:28.537 }' 00:18:28.537 13:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.537 13:42:07 -- common/autotest_common.sh@10 -- # set +x 00:18:29.105 13:42:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:29.364 [2024-07-10 13:42:08.633996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.365 [2024-07-10 13:42:08.634111] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:29.365 [2024-07-10 13:42:08.634131] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:29.365 [2024-07-10 13:42:08.634276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:29.365 [2024-07-10 13:42:08.634575] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:29.365 [2024-07-10 13:42:08.634617] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:29.365 [2024-07-10 13:42:08.634883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.365 BaseBdev4 00:18:29.365 13:42:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:29.365 13:42:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:29.365 13:42:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:29.365 13:42:08 -- common/autotest_common.sh@889 -- # local i 00:18:29.365 13:42:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:29.365 13:42:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:29.365 13:42:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.624 13:42:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:29.883 [ 00:18:29.883 { 00:18:29.883 "name": "BaseBdev4", 00:18:29.883 "aliases": [ 00:18:29.883 "b2f6ea6c-b13d-430e-956e-0383c3f959e5" 00:18:29.883 ], 00:18:29.883 "product_name": "Malloc disk", 00:18:29.883 "block_size": 512, 00:18:29.883 "num_blocks": 65536, 00:18:29.883 "uuid": "b2f6ea6c-b13d-430e-956e-0383c3f959e5", 00:18:29.883 "assigned_rate_limits": { 00:18:29.883 "rw_ios_per_sec": 0, 00:18:29.883 "rw_mbytes_per_sec": 0, 00:18:29.883 "r_mbytes_per_sec": 0, 00:18:29.883 "w_mbytes_per_sec": 0 00:18:29.883 }, 00:18:29.883 "claimed": true, 00:18:29.883 "claim_type": "exclusive_write", 00:18:29.883 "zoned": false, 00:18:29.883 
"supported_io_types": { 00:18:29.883 "read": true, 00:18:29.883 "write": true, 00:18:29.883 "unmap": true, 00:18:29.883 "write_zeroes": true, 00:18:29.883 "flush": true, 00:18:29.883 "reset": true, 00:18:29.883 "compare": false, 00:18:29.883 "compare_and_write": false, 00:18:29.883 "abort": true, 00:18:29.883 "nvme_admin": false, 00:18:29.883 "nvme_io": false 00:18:29.883 }, 00:18:29.883 "memory_domains": [ 00:18:29.883 { 00:18:29.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.883 "dma_device_type": 2 00:18:29.883 } 00:18:29.883 ], 00:18:29.883 "driver_specific": {} 00:18:29.883 } 00:18:29.883 ] 00:18:29.883 13:42:09 -- common/autotest_common.sh@895 -- # return 0 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.883 "name": "Existed_Raid", 00:18:29.883 "uuid": "a158a156-26b4-49fb-9696-6f4e9041bacd", 00:18:29.883 "strip_size_kb": 64, 00:18:29.883 "state": "online", 00:18:29.883 "raid_level": "raid0", 00:18:29.883 "superblock": false, 00:18:29.883 "num_base_bdevs": 4, 00:18:29.883 "num_base_bdevs_discovered": 4, 00:18:29.883 "num_base_bdevs_operational": 4, 00:18:29.883 "base_bdevs_list": [ 00:18:29.883 { 00:18:29.883 "name": "BaseBdev1", 00:18:29.883 "uuid": "a92ccb3a-976c-4359-95ad-70c4ea5ad9bf", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 0, 00:18:29.883 "data_size": 65536 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev2", 00:18:29.883 "uuid": "5178401c-6fda-4ac2-bf04-a0d696ce59cf", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 0, 00:18:29.883 "data_size": 65536 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev3", 00:18:29.883 "uuid": "2af6c852-7cad-4aa7-93ee-337e0da8b2be", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 0, 00:18:29.883 "data_size": 65536 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev4", 00:18:29.883 "uuid": "b2f6ea6c-b13d-430e-956e-0383c3f959e5", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 0, 00:18:29.883 "data_size": 65536 00:18:29.883 } 00:18:29.883 ] 00:18:29.883 }' 00:18:29.883 13:42:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.883 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:18:30.454 13:42:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:30.714 
[2024-07-10 13:42:09.991954] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.714 [2024-07-10 13:42:09.992057] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.714 [2024-07-10 13:42:09.992165] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.972 "name": "Existed_Raid", 00:18:30.972 "uuid": "a158a156-26b4-49fb-9696-6f4e9041bacd", 00:18:30.972 "strip_size_kb": 64, 00:18:30.972 "state": "offline", 00:18:30.972 "raid_level": "raid0", 00:18:30.972 "superblock": false, 00:18:30.972 "num_base_bdevs": 4, 00:18:30.972 "num_base_bdevs_discovered": 3, 00:18:30.972 "num_base_bdevs_operational": 3, 00:18:30.972 "base_bdevs_list": [ 00:18:30.972 { 00:18:30.972 "name": null, 00:18:30.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.972 "is_configured": false, 00:18:30.972 "data_offset": 0, 00:18:30.972 "data_size": 65536 00:18:30.972 }, 00:18:30.972 { 00:18:30.972 "name": "BaseBdev2", 00:18:30.972 "uuid": "5178401c-6fda-4ac2-bf04-a0d696ce59cf", 00:18:30.972 "is_configured": true, 00:18:30.972 "data_offset": 0, 00:18:30.972 "data_size": 65536 00:18:30.972 }, 00:18:30.972 { 00:18:30.972 "name": "BaseBdev3", 00:18:30.972 "uuid": "2af6c852-7cad-4aa7-93ee-337e0da8b2be", 00:18:30.972 "is_configured": true, 00:18:30.972 "data_offset": 0, 00:18:30.972 "data_size": 65536 00:18:30.972 }, 00:18:30.972 { 00:18:30.972 "name": "BaseBdev4", 00:18:30.972 "uuid": "b2f6ea6c-b13d-430e-956e-0383c3f959e5", 00:18:30.972 "is_configured": true, 00:18:30.972 "data_offset": 0, 00:18:30.972 "data_size": 65536 00:18:30.972 } 00:18:30.972 ] 00:18:30.972 }' 00:18:30.972 13:42:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.972 13:42:10 -- common/autotest_common.sh@10 -- # set +x 00:18:31.540 13:42:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:31.540 13:42:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.540 13:42:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.540 
13:42:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.798 13:42:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.798 13:42:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.798 13:42:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:32.057 [2024-07-10 13:42:11.236361] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:32.057 13:42:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.057 13:42:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.057 13:42:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.057 13:42:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.316 13:42:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.316 13:42:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.316 13:42:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:32.575 [2024-07-10 13:42:11.696868] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.575 13:42:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.575 13:42:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.575 13:42:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.575 13:42:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.834 13:42:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.834 13:42:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.834 13:42:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:32.834 [2024-07-10 13:42:12.141325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:32.834 [2024-07-10 13:42:12.141451] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:33.093 13:42:12 -- bdev/bdev_raid.sh@287 -- # killprocess 121644 00:18:33.093 13:42:12 -- common/autotest_common.sh@926 -- # '[' -z 121644 ']' 00:18:33.093 13:42:12 -- common/autotest_common.sh@930 -- # kill -0 121644 00:18:33.093 13:42:12 -- common/autotest_common.sh@931 -- # uname 00:18:33.093 13:42:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:33.093 13:42:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121644 00:18:33.351 killing process with pid 121644 00:18:33.351 13:42:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:33.351 13:42:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:33.351 13:42:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121644' 00:18:33.351 13:42:12 -- common/autotest_common.sh@945 -- # kill 121644 
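The deletions traced above follow a fixed pattern: remove one base bdev over RPC, confirm the raid bdev still answers to its name, and only once the last base bdev is gone expect the query to come back empty. A minimal sketch of that loop, assuming the rpc.py path and socket shown in this run (loop bounds and exit handling are illustrative, not the verbatim test code):

    # condensed sketch of the traced teardown loop; BaseBdev1 was already
    # deleted before the loop, so iterations remove BaseBdev2..BaseBdev4
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    num_base_bdevs=4
    for ((i = 1; i < num_base_bdevs; i++)); do
        # the raid bdev must survive (offline) while any base bdev remains
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [[ $raid_bdev == Existed_Raid ]] || exit 1
        $rpc bdev_malloc_delete "BaseBdev$((i + 1))"
    done
    # deleting the last base bdev triggers raid_bdev_cleanup;
    # select(.) filters the null so an empty config yields empty output
    [[ -z $($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)') ]] || exit 1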
00:18:33.351 13:42:12 -- common/autotest_common.sh@950 -- # wait 121644 00:18:33.351 [2024-07-10 13:42:12.450208] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.351 [2024-07-10 13:42:12.450327] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.728 ************************************ 00:18:34.728 END TEST raid_state_function_test 00:18:34.728 ************************************ 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:34.728 00:18:34.728 real 0m12.572s 00:18:34.728 user 0m22.065s 00:18:34.728 sys 0m1.444s 00:18:34.728 13:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.728 13:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:34.728 13:42:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:34.728 13:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:34.728 13:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:34.728 ************************************ 00:18:34.728 START TEST raid_state_function_test_sb 00:18:34.728 ************************************ 00:18:34.728 13:42:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:34.728 13:42:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:34.729 Process raid pid: 122091 00:18:34.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
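The `(( i <= num_base_bdevs ))` / `echo BaseBdev$i` trace at the top of this test is plain array construction: the helper builds the base bdev name list once and later splices it into the create call. A sketch of how those arguments come together, paraphrasing the traced helper with the values this run uses (the -z/-s assignments are traced just below):

    raid_level=raid0
    num_base_bdevs=4
    superblock=true
    # build the list "BaseBdev1 .. BaseBdev4" that the create call consumes
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    strip_size_create_arg=""
    [[ $raid_level != raid1 ]] && strip_size_create_arg="-z 64"   # raid1 takes no strip size
    superblock_create_arg=""
    [[ $superblock == true ]] && superblock_create_arg="-s"
    # later, once the app is up:
    # $rpc bdev_raid_create $strip_size_create_arg $superblock_create_arg \
    #     -r "$raid_level" -b "${base_bdevs[*]}" -n Existed_Raid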
00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=122091 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122091' 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122091 /var/tmp/spdk-raid.sock 00:18:34.729 13:42:13 -- common/autotest_common.sh@819 -- # '[' -z 122091 ']' 00:18:34.729 13:42:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:34.729 13:42:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:34.729 13:42:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:34.729 13:42:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:34.729 13:42:13 -- common/autotest_common.sh@10 -- # set +x 00:18:34.729 13:42:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:34.729 [2024-07-10 13:42:13.814971] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
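Before any raid RPCs can run, the test starts a dedicated bdev_svc app on its own UNIX socket and blocks in waitforlisten until the RPC server responds. Roughly, assuming that polling rpc_get_methods is an acceptable stand-in for waitforlisten's actual readiness check (the real helper also watches the pid and socket file):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the app answers a trivial RPC; give up after max_retries
    for ((retries = 100; retries > 0; retries--)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( retries > 0 )) || exit 1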
00:18:34.729 [2024-07-10 13:42:13.815136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.729 [2024-07-10 13:42:13.970111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.988 [2024-07-10 13:42:14.165143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.248 [2024-07-10 13:42:14.361361] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.507 13:42:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:35.507 13:42:14 -- common/autotest_common.sh@852 -- # return 0 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:35.507 [2024-07-10 13:42:14.781731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:35.507 [2024-07-10 13:42:14.781869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:35.507 [2024-07-10 13:42:14.781899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.507 [2024-07-10 13:42:14.781927] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.507 [2024-07-10 13:42:14.781942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.507 [2024-07-10 13:42:14.781979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.507 [2024-07-10 13:42:14.782011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.507 [2024-07-10 13:42:14.782055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.507 13:42:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.766 13:42:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.766 "name": "Existed_Raid", 00:18:35.766 "uuid": "049dd012-7041-41ca-a0ba-67143e4403db", 00:18:35.766 "strip_size_kb": 64, 00:18:35.766 "state": "configuring", 00:18:35.766 "raid_level": "raid0", 00:18:35.766 "superblock": true, 00:18:35.766 "num_base_bdevs": 4, 00:18:35.766 "num_base_bdevs_discovered": 0, 00:18:35.766 "num_base_bdevs_operational": 4, 00:18:35.766 "base_bdevs_list": [ 00:18:35.766 { 00:18:35.766 
"name": "BaseBdev1", 00:18:35.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.766 "is_configured": false, 00:18:35.766 "data_offset": 0, 00:18:35.766 "data_size": 0 00:18:35.766 }, 00:18:35.766 { 00:18:35.766 "name": "BaseBdev2", 00:18:35.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.766 "is_configured": false, 00:18:35.766 "data_offset": 0, 00:18:35.766 "data_size": 0 00:18:35.766 }, 00:18:35.766 { 00:18:35.767 "name": "BaseBdev3", 00:18:35.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.767 "is_configured": false, 00:18:35.767 "data_offset": 0, 00:18:35.767 "data_size": 0 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 "name": "BaseBdev4", 00:18:35.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.767 "is_configured": false, 00:18:35.767 "data_offset": 0, 00:18:35.767 "data_size": 0 00:18:35.767 } 00:18:35.767 ] 00:18:35.767 }' 00:18:35.767 13:42:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.767 13:42:14 -- common/autotest_common.sh@10 -- # set +x 00:18:36.333 13:42:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:36.591 [2024-07-10 13:42:15.739947] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.591 [2024-07-10 13:42:15.740046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:36.591 13:42:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:36.591 [2024-07-10 13:42:15.915737] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:36.591 [2024-07-10 13:42:15.915848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:36.591 [2024-07-10 13:42:15.915870] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.591 [2024-07-10 13:42:15.915908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.591 [2024-07-10 13:42:15.915926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:36.591 [2024-07-10 13:42:15.915962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:36.591 [2024-07-10 13:42:15.915977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:36.591 [2024-07-10 13:42:15.916032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:36.591 13:42:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:36.850 [2024-07-10 13:42:16.130044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.850 BaseBdev1 00:18:36.850 13:42:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:36.850 13:42:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:36.850 13:42:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:36.850 13:42:16 -- common/autotest_common.sh@889 -- # local i 00:18:36.850 13:42:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:36.850 13:42:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:36.850 13:42:16 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.108 13:42:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:37.366 [ 00:18:37.366 { 00:18:37.366 "name": "BaseBdev1", 00:18:37.366 "aliases": [ 00:18:37.366 "0cc4fae6-7970-4f71-a25f-dcf9ceef28c8" 00:18:37.366 ], 00:18:37.366 "product_name": "Malloc disk", 00:18:37.366 "block_size": 512, 00:18:37.366 "num_blocks": 65536, 00:18:37.366 "uuid": "0cc4fae6-7970-4f71-a25f-dcf9ceef28c8", 00:18:37.366 "assigned_rate_limits": { 00:18:37.366 "rw_ios_per_sec": 0, 00:18:37.366 "rw_mbytes_per_sec": 0, 00:18:37.366 "r_mbytes_per_sec": 0, 00:18:37.366 "w_mbytes_per_sec": 0 00:18:37.366 }, 00:18:37.366 "claimed": true, 00:18:37.366 "claim_type": "exclusive_write", 00:18:37.366 "zoned": false, 00:18:37.366 "supported_io_types": { 00:18:37.366 "read": true, 00:18:37.366 "write": true, 00:18:37.366 "unmap": true, 00:18:37.366 "write_zeroes": true, 00:18:37.366 "flush": true, 00:18:37.366 "reset": true, 00:18:37.366 "compare": false, 00:18:37.366 "compare_and_write": false, 00:18:37.366 "abort": true, 00:18:37.366 "nvme_admin": false, 00:18:37.366 "nvme_io": false 00:18:37.366 }, 00:18:37.366 "memory_domains": [ 00:18:37.366 { 00:18:37.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.366 "dma_device_type": 2 00:18:37.366 } 00:18:37.366 ], 00:18:37.366 "driver_specific": {} 00:18:37.366 } 00:18:37.366 ] 00:18:37.366 13:42:16 -- common/autotest_common.sh@895 -- # return 0 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.366 13:42:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.366 "name": "Existed_Raid", 00:18:37.366 "uuid": "f81ce999-a236-4218-b94c-3f50882ebc7b", 00:18:37.366 "strip_size_kb": 64, 00:18:37.366 "state": "configuring", 00:18:37.366 "raid_level": "raid0", 00:18:37.366 "superblock": true, 00:18:37.366 "num_base_bdevs": 4, 00:18:37.366 "num_base_bdevs_discovered": 1, 00:18:37.366 "num_base_bdevs_operational": 4, 00:18:37.367 "base_bdevs_list": [ 00:18:37.367 { 00:18:37.367 "name": "BaseBdev1", 00:18:37.367 "uuid": "0cc4fae6-7970-4f71-a25f-dcf9ceef28c8", 00:18:37.367 "is_configured": true, 00:18:37.367 "data_offset": 2048, 00:18:37.367 "data_size": 63488 00:18:37.367 }, 00:18:37.367 { 00:18:37.367 "name": "BaseBdev2", 00:18:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.367 "is_configured": false, 00:18:37.367 "data_offset": 0, 00:18:37.367 "data_size": 0 00:18:37.367 }, 
00:18:37.367 { 00:18:37.367 "name": "BaseBdev3", 00:18:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.367 "is_configured": false, 00:18:37.367 "data_offset": 0, 00:18:37.367 "data_size": 0 00:18:37.367 }, 00:18:37.367 { 00:18:37.367 "name": "BaseBdev4", 00:18:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.367 "is_configured": false, 00:18:37.367 "data_offset": 0, 00:18:37.367 "data_size": 0 00:18:37.367 } 00:18:37.367 ] 00:18:37.367 }' 00:18:37.367 13:42:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.367 13:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.935 13:42:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:38.194 [2024-07-10 13:42:17.411881] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.194 [2024-07-10 13:42:17.412006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:38.194 13:42:17 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:38.194 13:42:17 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:38.453 13:42:17 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:38.713 BaseBdev1 00:18:38.713 13:42:17 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:38.713 13:42:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:38.713 13:42:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:38.713 13:42:17 -- common/autotest_common.sh@889 -- # local i 00:18:38.713 13:42:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:38.713 13:42:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:38.713 13:42:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.972 13:42:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:38.972 [ 00:18:38.972 { 00:18:38.972 "name": "BaseBdev1", 00:18:38.972 "aliases": [ 00:18:38.972 "b013e250-045a-4446-be49-efb017dcf516" 00:18:38.972 ], 00:18:38.972 "product_name": "Malloc disk", 00:18:38.972 "block_size": 512, 00:18:38.972 "num_blocks": 65536, 00:18:38.972 "uuid": "b013e250-045a-4446-be49-efb017dcf516", 00:18:38.972 "assigned_rate_limits": { 00:18:38.972 "rw_ios_per_sec": 0, 00:18:38.972 "rw_mbytes_per_sec": 0, 00:18:38.972 "r_mbytes_per_sec": 0, 00:18:38.972 "w_mbytes_per_sec": 0 00:18:38.972 }, 00:18:38.972 "claimed": false, 00:18:38.972 "zoned": false, 00:18:38.972 "supported_io_types": { 00:18:38.972 "read": true, 00:18:38.972 "write": true, 00:18:38.972 "unmap": true, 00:18:38.972 "write_zeroes": true, 00:18:38.972 "flush": true, 00:18:38.972 "reset": true, 00:18:38.972 "compare": false, 00:18:38.972 "compare_and_write": false, 00:18:38.972 "abort": true, 00:18:38.972 "nvme_admin": false, 00:18:38.972 "nvme_io": false 00:18:38.972 }, 00:18:38.972 "memory_domains": [ 00:18:38.972 { 00:18:38.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.972 "dma_device_type": 2 00:18:38.972 } 00:18:38.972 ], 00:18:38.972 "driver_specific": {} 00:18:38.972 } 00:18:38.972 ] 00:18:38.972 13:42:18 -- common/autotest_common.sh@895 -- # return 0 00:18:38.972 13:42:18 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:39.232 [2024-07-10 13:42:18.417340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.232 [2024-07-10 13:42:18.418974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.232 [2024-07-10 13:42:18.419066] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.232 [2024-07-10 13:42:18.419095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.232 [2024-07-10 13:42:18.419125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.232 [2024-07-10 13:42:18.419148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:39.232 [2024-07-10 13:42:18.419170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.232 13:42:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.492 13:42:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.492 "name": "Existed_Raid", 00:18:39.492 "uuid": "fcb9afb5-3388-4b63-927d-b7c789098185", 00:18:39.492 "strip_size_kb": 64, 00:18:39.492 "state": "configuring", 00:18:39.492 "raid_level": "raid0", 00:18:39.492 "superblock": true, 00:18:39.492 "num_base_bdevs": 4, 00:18:39.492 "num_base_bdevs_discovered": 1, 00:18:39.492 "num_base_bdevs_operational": 4, 00:18:39.492 "base_bdevs_list": [ 00:18:39.492 { 00:18:39.492 "name": "BaseBdev1", 00:18:39.492 "uuid": "b013e250-045a-4446-be49-efb017dcf516", 00:18:39.492 "is_configured": true, 00:18:39.492 "data_offset": 2048, 00:18:39.492 "data_size": 63488 00:18:39.492 }, 00:18:39.492 { 00:18:39.492 "name": "BaseBdev2", 00:18:39.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.492 "is_configured": false, 00:18:39.492 "data_offset": 0, 00:18:39.492 "data_size": 0 00:18:39.492 }, 00:18:39.492 { 00:18:39.492 "name": "BaseBdev3", 00:18:39.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.492 "is_configured": false, 00:18:39.492 "data_offset": 0, 00:18:39.492 "data_size": 0 00:18:39.493 }, 00:18:39.493 { 00:18:39.493 "name": "BaseBdev4", 00:18:39.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.493 "is_configured": 
false, 00:18:39.493 "data_offset": 0, 00:18:39.493 "data_size": 0 00:18:39.493 } 00:18:39.493 ] 00:18:39.493 }' 00:18:39.493 13:42:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.493 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:18:40.060 13:42:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:40.319 [2024-07-10 13:42:19.425045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.319 BaseBdev2 00:18:40.319 13:42:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:40.319 13:42:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:40.319 13:42:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:40.319 13:42:19 -- common/autotest_common.sh@889 -- # local i 00:18:40.319 13:42:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:40.319 13:42:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:40.319 13:42:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.319 13:42:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:40.580 [ 00:18:40.580 { 00:18:40.580 "name": "BaseBdev2", 00:18:40.580 "aliases": [ 00:18:40.580 "65a39955-cbc1-4bbd-874b-d0ee6dc84d71" 00:18:40.580 ], 00:18:40.580 "product_name": "Malloc disk", 00:18:40.580 "block_size": 512, 00:18:40.580 "num_blocks": 65536, 00:18:40.580 "uuid": "65a39955-cbc1-4bbd-874b-d0ee6dc84d71", 00:18:40.580 "assigned_rate_limits": { 00:18:40.580 "rw_ios_per_sec": 0, 00:18:40.580 "rw_mbytes_per_sec": 0, 00:18:40.580 "r_mbytes_per_sec": 0, 00:18:40.580 "w_mbytes_per_sec": 0 00:18:40.580 }, 00:18:40.580 "claimed": true, 00:18:40.580 "claim_type": "exclusive_write", 00:18:40.580 "zoned": false, 00:18:40.580 "supported_io_types": { 00:18:40.580 "read": true, 00:18:40.580 "write": true, 00:18:40.580 "unmap": true, 00:18:40.580 "write_zeroes": true, 00:18:40.580 "flush": true, 00:18:40.580 "reset": true, 00:18:40.580 "compare": false, 00:18:40.580 "compare_and_write": false, 00:18:40.580 "abort": true, 00:18:40.580 "nvme_admin": false, 00:18:40.580 "nvme_io": false 00:18:40.580 }, 00:18:40.580 "memory_domains": [ 00:18:40.580 { 00:18:40.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.580 "dma_device_type": 2 00:18:40.580 } 00:18:40.580 ], 00:18:40.580 "driver_specific": {} 00:18:40.580 } 00:18:40.580 ] 00:18:40.580 13:42:19 -- common/autotest_common.sh@895 -- # return 0 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.580 
13:42:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.580 13:42:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.839 13:42:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.839 "name": "Existed_Raid", 00:18:40.839 "uuid": "fcb9afb5-3388-4b63-927d-b7c789098185", 00:18:40.839 "strip_size_kb": 64, 00:18:40.839 "state": "configuring", 00:18:40.839 "raid_level": "raid0", 00:18:40.839 "superblock": true, 00:18:40.839 "num_base_bdevs": 4, 00:18:40.839 "num_base_bdevs_discovered": 2, 00:18:40.839 "num_base_bdevs_operational": 4, 00:18:40.839 "base_bdevs_list": [ 00:18:40.839 { 00:18:40.839 "name": "BaseBdev1", 00:18:40.839 "uuid": "b013e250-045a-4446-be49-efb017dcf516", 00:18:40.839 "is_configured": true, 00:18:40.839 "data_offset": 2048, 00:18:40.839 "data_size": 63488 00:18:40.839 }, 00:18:40.839 { 00:18:40.839 "name": "BaseBdev2", 00:18:40.839 "uuid": "65a39955-cbc1-4bbd-874b-d0ee6dc84d71", 00:18:40.839 "is_configured": true, 00:18:40.839 "data_offset": 2048, 00:18:40.839 "data_size": 63488 00:18:40.839 }, 00:18:40.839 { 00:18:40.839 "name": "BaseBdev3", 00:18:40.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.839 "is_configured": false, 00:18:40.839 "data_offset": 0, 00:18:40.839 "data_size": 0 00:18:40.839 }, 00:18:40.839 { 00:18:40.839 "name": "BaseBdev4", 00:18:40.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.839 "is_configured": false, 00:18:40.839 "data_offset": 0, 00:18:40.839 "data_size": 0 00:18:40.839 } 00:18:40.839 ] 00:18:40.839 }' 00:18:40.839 13:42:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.839 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:18:41.417 13:42:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:41.686 [2024-07-10 13:42:20.842980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:41.686 BaseBdev3 00:18:41.686 13:42:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:41.686 13:42:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:41.686 13:42:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:41.686 13:42:20 -- common/autotest_common.sh@889 -- # local i 00:18:41.686 13:42:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:41.686 13:42:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:41.686 13:42:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.686 13:42:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:41.946 [ 00:18:41.946 { 00:18:41.946 "name": "BaseBdev3", 00:18:41.946 "aliases": [ 00:18:41.946 "5a5a2ec5-5d77-4ad4-8c3b-5aaed41d710f" 00:18:41.946 ], 00:18:41.946 "product_name": "Malloc disk", 00:18:41.946 "block_size": 512, 00:18:41.946 "num_blocks": 65536, 00:18:41.946 "uuid": "5a5a2ec5-5d77-4ad4-8c3b-5aaed41d710f", 00:18:41.946 "assigned_rate_limits": { 00:18:41.946 "rw_ios_per_sec": 0, 00:18:41.946 "rw_mbytes_per_sec": 0, 00:18:41.946 "r_mbytes_per_sec": 0, 00:18:41.946 "w_mbytes_per_sec": 0 00:18:41.946 }, 00:18:41.946 "claimed": true, 00:18:41.946 "claim_type": "exclusive_write", 00:18:41.946 "zoned": false, 
00:18:41.946 "supported_io_types": { 00:18:41.946 "read": true, 00:18:41.946 "write": true, 00:18:41.946 "unmap": true, 00:18:41.946 "write_zeroes": true, 00:18:41.946 "flush": true, 00:18:41.946 "reset": true, 00:18:41.946 "compare": false, 00:18:41.946 "compare_and_write": false, 00:18:41.946 "abort": true, 00:18:41.946 "nvme_admin": false, 00:18:41.946 "nvme_io": false 00:18:41.946 }, 00:18:41.946 "memory_domains": [ 00:18:41.946 { 00:18:41.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.946 "dma_device_type": 2 00:18:41.946 } 00:18:41.946 ], 00:18:41.946 "driver_specific": {} 00:18:41.946 } 00:18:41.946 ] 00:18:41.946 13:42:21 -- common/autotest_common.sh@895 -- # return 0 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.946 13:42:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.206 13:42:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.206 "name": "Existed_Raid", 00:18:42.206 "uuid": "fcb9afb5-3388-4b63-927d-b7c789098185", 00:18:42.206 "strip_size_kb": 64, 00:18:42.206 "state": "configuring", 00:18:42.206 "raid_level": "raid0", 00:18:42.206 "superblock": true, 00:18:42.206 "num_base_bdevs": 4, 00:18:42.206 "num_base_bdevs_discovered": 3, 00:18:42.207 "num_base_bdevs_operational": 4, 00:18:42.207 "base_bdevs_list": [ 00:18:42.207 { 00:18:42.207 "name": "BaseBdev1", 00:18:42.207 "uuid": "b013e250-045a-4446-be49-efb017dcf516", 00:18:42.207 "is_configured": true, 00:18:42.207 "data_offset": 2048, 00:18:42.207 "data_size": 63488 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "name": "BaseBdev2", 00:18:42.207 "uuid": "65a39955-cbc1-4bbd-874b-d0ee6dc84d71", 00:18:42.207 "is_configured": true, 00:18:42.207 "data_offset": 2048, 00:18:42.207 "data_size": 63488 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "name": "BaseBdev3", 00:18:42.207 "uuid": "5a5a2ec5-5d77-4ad4-8c3b-5aaed41d710f", 00:18:42.207 "is_configured": true, 00:18:42.207 "data_offset": 2048, 00:18:42.207 "data_size": 63488 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "name": "BaseBdev4", 00:18:42.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.207 "is_configured": false, 00:18:42.207 "data_offset": 0, 00:18:42.207 "data_size": 0 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }' 00:18:42.207 13:42:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.207 13:42:21 -- common/autotest_common.sh@10 -- # set +x 00:18:42.777 13:42:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.777 [2024-07-10 13:42:22.113346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.777 [2024-07-10 13:42:22.113642] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:42.777 [2024-07-10 13:42:22.113685] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:42.777 [2024-07-10 13:42:22.113812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:42.777 BaseBdev4 00:18:42.777 [2024-07-10 13:42:22.114118] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:42.777 [2024-07-10 13:42:22.114128] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:42.777 [2024-07-10 13:42:22.114257] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.777 13:42:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:42.777 13:42:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:42.777 13:42:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.777 13:42:22 -- common/autotest_common.sh@889 -- # local i 00:18:42.777 13:42:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.777 13:42:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.777 13:42:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.037 13:42:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.296 [ 00:18:43.296 { 00:18:43.296 "name": "BaseBdev4", 00:18:43.296 "aliases": [ 00:18:43.296 "9dbf44ca-30f0-4d73-b201-65eab0ce7a7f" 00:18:43.296 ], 00:18:43.296 "product_name": "Malloc disk", 00:18:43.296 "block_size": 512, 00:18:43.296 "num_blocks": 65536, 00:18:43.296 "uuid": "9dbf44ca-30f0-4d73-b201-65eab0ce7a7f", 00:18:43.296 "assigned_rate_limits": { 00:18:43.296 "rw_ios_per_sec": 0, 00:18:43.296 "rw_mbytes_per_sec": 0, 00:18:43.296 "r_mbytes_per_sec": 0, 00:18:43.296 "w_mbytes_per_sec": 0 00:18:43.296 }, 00:18:43.296 "claimed": true, 00:18:43.296 "claim_type": "exclusive_write", 00:18:43.296 "zoned": false, 00:18:43.296 "supported_io_types": { 00:18:43.296 "read": true, 00:18:43.296 "write": true, 00:18:43.296 "unmap": true, 00:18:43.296 "write_zeroes": true, 00:18:43.296 "flush": true, 00:18:43.296 "reset": true, 00:18:43.296 "compare": false, 00:18:43.296 "compare_and_write": false, 00:18:43.296 "abort": true, 00:18:43.296 "nvme_admin": false, 00:18:43.296 "nvme_io": false 00:18:43.296 }, 00:18:43.296 "memory_domains": [ 00:18:43.296 { 00:18:43.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.296 "dma_device_type": 2 00:18:43.296 } 00:18:43.296 ], 00:18:43.296 "driver_specific": {} 00:18:43.296 } 00:18:43.296 ] 00:18:43.296 13:42:22 -- common/autotest_common.sh@895 -- # return 0 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.296 13:42:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.297 "name": "Existed_Raid", 00:18:43.297 "uuid": "fcb9afb5-3388-4b63-927d-b7c789098185", 00:18:43.297 "strip_size_kb": 64, 00:18:43.297 "state": "online", 00:18:43.297 "raid_level": "raid0", 00:18:43.297 "superblock": true, 00:18:43.297 "num_base_bdevs": 4, 00:18:43.297 "num_base_bdevs_discovered": 4, 00:18:43.297 "num_base_bdevs_operational": 4, 00:18:43.297 "base_bdevs_list": [ 00:18:43.297 { 00:18:43.297 "name": "BaseBdev1", 00:18:43.297 "uuid": "b013e250-045a-4446-be49-efb017dcf516", 00:18:43.297 "is_configured": true, 00:18:43.297 "data_offset": 2048, 00:18:43.297 "data_size": 63488 00:18:43.297 }, 00:18:43.297 { 00:18:43.297 "name": "BaseBdev2", 00:18:43.297 "uuid": "65a39955-cbc1-4bbd-874b-d0ee6dc84d71", 00:18:43.297 "is_configured": true, 00:18:43.297 "data_offset": 2048, 00:18:43.297 "data_size": 63488 00:18:43.297 }, 00:18:43.297 { 00:18:43.297 "name": "BaseBdev3", 00:18:43.297 "uuid": "5a5a2ec5-5d77-4ad4-8c3b-5aaed41d710f", 00:18:43.297 "is_configured": true, 00:18:43.297 "data_offset": 2048, 00:18:43.297 "data_size": 63488 00:18:43.297 }, 00:18:43.297 { 00:18:43.297 "name": "BaseBdev4", 00:18:43.297 "uuid": "9dbf44ca-30f0-4d73-b201-65eab0ce7a7f", 00:18:43.297 "is_configured": true, 00:18:43.297 "data_offset": 2048, 00:18:43.297 "data_size": 63488 00:18:43.297 } 00:18:43.297 ] 00:18:43.297 }' 00:18:43.297 13:42:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.297 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:44.272 [2024-07-10 13:42:23.435206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.272 [2024-07-10 13:42:23.435304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.272 [2024-07-10 13:42:23.435402] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.272 13:42:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.531 13:42:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.531 "name": "Existed_Raid", 00:18:44.531 "uuid": "fcb9afb5-3388-4b63-927d-b7c789098185", 00:18:44.531 "strip_size_kb": 64, 00:18:44.531 "state": "offline", 00:18:44.531 "raid_level": "raid0", 00:18:44.531 "superblock": true, 00:18:44.531 "num_base_bdevs": 4, 00:18:44.531 "num_base_bdevs_discovered": 3, 00:18:44.531 "num_base_bdevs_operational": 3, 00:18:44.531 "base_bdevs_list": [ 00:18:44.531 { 00:18:44.531 "name": null, 00:18:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.531 "is_configured": false, 00:18:44.531 "data_offset": 2048, 00:18:44.531 "data_size": 63488 00:18:44.531 }, 00:18:44.531 { 00:18:44.531 "name": "BaseBdev2", 00:18:44.531 "uuid": "65a39955-cbc1-4bbd-874b-d0ee6dc84d71", 00:18:44.531 "is_configured": true, 00:18:44.531 "data_offset": 2048, 00:18:44.531 "data_size": 63488 00:18:44.531 }, 00:18:44.531 { 00:18:44.531 "name": "BaseBdev3", 00:18:44.531 "uuid": "5a5a2ec5-5d77-4ad4-8c3b-5aaed41d710f", 00:18:44.531 "is_configured": true, 00:18:44.531 "data_offset": 2048, 00:18:44.531 "data_size": 63488 00:18:44.531 }, 00:18:44.531 { 00:18:44.531 "name": "BaseBdev4", 00:18:44.531 "uuid": "9dbf44ca-30f0-4d73-b201-65eab0ce7a7f", 00:18:44.531 "is_configured": true, 00:18:44.531 "data_offset": 2048, 00:18:44.531 "data_size": 63488 00:18:44.531 } 00:18:44.531 ] 00:18:44.531 }' 00:18:44.531 13:42:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.531 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:18:45.099 13:42:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:45.099 13:42:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:45.099 13:42:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.099 13:42:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:45.359 13:42:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:45.359 13:42:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.359 13:42:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:45.617 [2024-07-10 13:42:24.784050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.617 13:42:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:45.617 13:42:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:45.617 13:42:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.617 13:42:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:45.876 13:42:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:45.876 13:42:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.876 13:42:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:45.876 [2024-07-10 13:42:25.211183] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:46.135 13:42:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:46.135 13:42:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:46.135 13:42:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.135 13:42:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:46.417 13:42:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:46.417 13:42:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:46.417 13:42:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:46.417 [2024-07-10 13:42:25.677696] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:46.417 [2024-07-10 13:42:25.677827] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:46.677 13:42:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:46.677 13:42:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:46.677 13:42:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:46.677 13:42:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.677 13:42:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:46.677 13:42:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:46.677 13:42:26 -- bdev/bdev_raid.sh@287 -- # killprocess 122091 00:18:46.677 13:42:26 -- common/autotest_common.sh@926 -- # '[' -z 122091 ']' 00:18:46.677 13:42:26 -- common/autotest_common.sh@930 -- # kill -0 122091 00:18:46.677 13:42:26 -- common/autotest_common.sh@931 -- # uname 00:18:46.677 13:42:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.677 13:42:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122091 00:18:46.937 13:42:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.937 13:42:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.937 13:42:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122091' 00:18:46.937 killing process with pid 122091 00:18:46.937 13:42:26 -- common/autotest_common.sh@945 -- # kill 122091 00:18:46.937 13:42:26 -- common/autotest_common.sh@950 -- # wait 122091 00:18:46.937 [2024-07-10 13:42:26.042251] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.937 [2024-07-10 13:42:26.042384] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.344 ************************************ 00:18:48.344 END TEST raid_state_function_test_sb 00:18:48.344 ************************************ 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:48.344 00:18:48.344 real 0m13.580s 00:18:48.344 user 0m23.761s 00:18:48.344 sys 0m1.587s 00:18:48.344 13:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.344 13:42:27 -- common/autotest_common.sh@10 -- # set +x 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:48.344 13:42:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:48.344 13:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:48.344 13:42:27 -- common/autotest_common.sh@10 -- # set +x 00:18:48.344 ************************************ 00:18:48.344 START 
TEST raid_superblock_test 00:18:48.344 ************************************ 00:18:48.344 13:42:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:48.344 13:42:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=122557 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:48.345 13:42:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122557 /var/tmp/spdk-raid.sock 00:18:48.345 13:42:27 -- common/autotest_common.sh@819 -- # '[' -z 122557 ']' 00:18:48.345 13:42:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:48.345 13:42:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:48.345 13:42:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:48.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:48.345 13:42:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:48.345 13:42:27 -- common/autotest_common.sh@10 -- # set +x 00:18:48.345 [2024-07-10 13:42:27.462026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
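Unlike the state-function tests, raid_superblock_test does not build the raid directly on malloc disks: each leg is a passthru bdev (pt1..pt4) stacked on a malloc bdev with a pinned UUID, which is what the bdev_passthru_create calls traced below are doing. Condensed into one loop (names, sizes and UUIDs are the ones this run uses; the real test issues the calls one helper at a time):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"        # 32 MiB, 512 B blocks
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"      # fixed, predictable UUID
    done
    # -s writes the on-disk superblock that this test variant exercises
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s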
00:18:48.345 [2024-07-10 13:42:27.462246] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122557 ] 00:18:48.345 [2024-07-10 13:42:27.617765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.604 [2024-07-10 13:42:27.815493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.863 [2024-07-10 13:42:28.039984] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.121 13:42:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:49.121 13:42:28 -- common/autotest_common.sh@852 -- # return 0 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.121 13:42:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:49.379 malloc1 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.379 [2024-07-10 13:42:28.698938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:49.379 [2024-07-10 13:42:28.699071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.379 [2024-07-10 13:42:28.699113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:49.379 [2024-07-10 13:42:28.699175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.379 [2024-07-10 13:42:28.701375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.379 [2024-07-10 13:42:28.701453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.379 pt1 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.379 13:42:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:49.637 malloc2 00:18:49.638 13:42:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:49.904 [2024-07-10 13:42:29.132205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:49.904 [2024-07-10 13:42:29.132376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.904 [2024-07-10 13:42:29.132430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:49.904 [2024-07-10 13:42:29.132508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.904 [2024-07-10 13:42:29.134619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.904 [2024-07-10 13:42:29.134721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:49.904 pt2 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:49.904 13:42:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:50.161 malloc3 00:18:50.161 13:42:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.419 [2024-07-10 13:42:29.520089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.419 [2024-07-10 13:42:29.520239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.420 [2024-07-10 13:42:29.520288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:50.420 [2024-07-10 13:42:29.520338] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.420 [2024-07-10 13:42:29.522279] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.420 [2024-07-10 13:42:29.522361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.420 pt3 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:50.420 malloc4 00:18:50.420 13:42:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:50.678 [2024-07-10 13:42:29.935685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:50.678 [2024-07-10 13:42:29.935851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.678 [2024-07-10 13:42:29.935915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.678 [2024-07-10 13:42:29.936016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.678 [2024-07-10 13:42:29.938126] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.678 [2024-07-10 13:42:29.938205] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:50.678 pt4 00:18:50.678 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:50.678 13:42:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:50.678 13:42:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:50.937 [2024-07-10 13:42:30.143415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:50.937 [2024-07-10 13:42:30.145287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.937 [2024-07-10 13:42:30.145391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.937 [2024-07-10 13:42:30.145474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:50.937 [2024-07-10 13:42:30.145697] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:50.937 [2024-07-10 13:42:30.145735] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:50.937 [2024-07-10 13:42:30.145894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:50.937 [2024-07-10 13:42:30.146271] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:50.937 [2024-07-10 13:42:30.146314] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:50.937 [2024-07-10 13:42:30.146490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.937 13:42:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.195 13:42:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.195 "name": "raid_bdev1", 00:18:51.195 "uuid": 
"5d4a301d-9a42-44a5-a742-2182d71c519e", 00:18:51.195 "strip_size_kb": 64, 00:18:51.195 "state": "online", 00:18:51.195 "raid_level": "raid0", 00:18:51.195 "superblock": true, 00:18:51.195 "num_base_bdevs": 4, 00:18:51.195 "num_base_bdevs_discovered": 4, 00:18:51.195 "num_base_bdevs_operational": 4, 00:18:51.195 "base_bdevs_list": [ 00:18:51.195 { 00:18:51.195 "name": "pt1", 00:18:51.195 "uuid": "485bfbfd-ff6c-50e5-b7e9-09e3d29385d0", 00:18:51.195 "is_configured": true, 00:18:51.195 "data_offset": 2048, 00:18:51.195 "data_size": 63488 00:18:51.196 }, 00:18:51.196 { 00:18:51.196 "name": "pt2", 00:18:51.196 "uuid": "cf2a13fe-e5a0-5c30-8ad3-847c2756a7d6", 00:18:51.196 "is_configured": true, 00:18:51.196 "data_offset": 2048, 00:18:51.196 "data_size": 63488 00:18:51.196 }, 00:18:51.196 { 00:18:51.196 "name": "pt3", 00:18:51.196 "uuid": "47422584-57cd-596a-9ff3-0a6c08748470", 00:18:51.196 "is_configured": true, 00:18:51.196 "data_offset": 2048, 00:18:51.196 "data_size": 63488 00:18:51.196 }, 00:18:51.196 { 00:18:51.196 "name": "pt4", 00:18:51.196 "uuid": "3914cad1-353b-5018-a56e-d53d3c8baa9e", 00:18:51.196 "is_configured": true, 00:18:51.196 "data_offset": 2048, 00:18:51.196 "data_size": 63488 00:18:51.196 } 00:18:51.196 ] 00:18:51.196 }' 00:18:51.196 13:42:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.196 13:42:30 -- common/autotest_common.sh@10 -- # set +x 00:18:51.761 13:42:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:51.761 13:42:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:51.761 [2024-07-10 13:42:31.101873] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.019 13:42:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5d4a301d-9a42-44a5-a742-2182d71c519e 00:18:52.019 13:42:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 5d4a301d-9a42-44a5-a742-2182d71c519e ']' 00:18:52.019 13:42:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.019 [2024-07-10 13:42:31.289348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.019 [2024-07-10 13:42:31.289435] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.019 [2024-07-10 13:42:31.289537] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.019 [2024-07-10 13:42:31.289612] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.019 [2024-07-10 13:42:31.289629] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:52.019 13:42:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.019 13:42:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:52.277 13:42:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:52.277 13:42:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:52.277 13:42:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.277 13:42:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:52.534 13:42:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.534 13:42:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:52.534 13:42:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.534 13:42:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:52.792 13:42:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:52.792 13:42:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:53.049 13:42:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:53.049 13:42:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:53.307 13:42:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:53.307 13:42:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.307 13:42:32 -- common/autotest_common.sh@640 -- # local es=0 00:18:53.307 13:42:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.307 13:42:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.307 13:42:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.307 13:42:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.307 13:42:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.307 13:42:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.307 13:42:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:53.307 13:42:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.307 13:42:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:53.307 13:42:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:53.307 [2024-07-10 13:42:32.647093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:53.307 [2024-07-10 13:42:32.649102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:53.307 [2024-07-10 13:42:32.649191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:53.307 [2024-07-10 13:42:32.649255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:53.308 [2024-07-10 13:42:32.649327] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:53.308 [2024-07-10 13:42:32.649417] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:53.308 [2024-07-10 13:42:32.649470] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:53.308 [2024-07-10 13:42:32.649549] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:53.308 [2024-07-10 13:42:32.649594] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.308 [2024-07-10 13:42:32.649624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:53.308 request: 00:18:53.308 { 00:18:53.308 "name": "raid_bdev1", 00:18:53.308 "raid_level": "raid0", 00:18:53.308 "base_bdevs": [ 00:18:53.308 "malloc1", 00:18:53.308 "malloc2", 00:18:53.308 "malloc3", 00:18:53.308 "malloc4" 00:18:53.308 ], 00:18:53.308 "superblock": false, 00:18:53.308 "strip_size_kb": 64, 00:18:53.308 "method": "bdev_raid_create", 00:18:53.308 "req_id": 1 00:18:53.308 } 00:18:53.308 Got JSON-RPC error response 00:18:53.308 response: 00:18:53.308 { 00:18:53.308 "code": -17, 00:18:53.308 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:53.308 } 00:18:53.566 13:42:32 -- common/autotest_common.sh@643 -- # es=1 00:18:53.566 13:42:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:53.566 13:42:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:53.566 13:42:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:53.566 13:42:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.566 13:42:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:53.566 13:42:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:53.566 13:42:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:53.566 13:42:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.823 [2024-07-10 13:42:33.042376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.823 [2024-07-10 13:42:33.042533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.823 [2024-07-10 13:42:33.042579] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:53.823 [2024-07-10 13:42:33.042621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.823 [2024-07-10 13:42:33.044675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.823 [2024-07-10 13:42:33.044774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.823 [2024-07-10 13:42:33.044897] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:53.823 [2024-07-10 13:42:33.045014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.823 pt1 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.823 13:42:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.824 13:42:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.081 13:42:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.081 "name": "raid_bdev1", 00:18:54.081 "uuid": "5d4a301d-9a42-44a5-a742-2182d71c519e", 00:18:54.081 "strip_size_kb": 64, 00:18:54.081 "state": "configuring", 00:18:54.081 "raid_level": "raid0", 00:18:54.081 "superblock": true, 00:18:54.081 "num_base_bdevs": 4, 00:18:54.081 "num_base_bdevs_discovered": 1, 00:18:54.081 "num_base_bdevs_operational": 4, 00:18:54.081 "base_bdevs_list": [ 00:18:54.081 { 00:18:54.081 "name": "pt1", 00:18:54.081 "uuid": "485bfbfd-ff6c-50e5-b7e9-09e3d29385d0", 00:18:54.081 "is_configured": true, 00:18:54.081 "data_offset": 2048, 00:18:54.081 "data_size": 63488 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "name": null, 00:18:54.081 "uuid": "cf2a13fe-e5a0-5c30-8ad3-847c2756a7d6", 00:18:54.081 "is_configured": false, 00:18:54.081 "data_offset": 2048, 00:18:54.081 "data_size": 63488 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "name": null, 00:18:54.081 "uuid": "47422584-57cd-596a-9ff3-0a6c08748470", 00:18:54.081 "is_configured": false, 00:18:54.081 "data_offset": 2048, 00:18:54.081 "data_size": 63488 00:18:54.081 }, 00:18:54.081 { 00:18:54.081 "name": null, 00:18:54.081 "uuid": "3914cad1-353b-5018-a56e-d53d3c8baa9e", 00:18:54.081 "is_configured": false, 00:18:54.081 "data_offset": 2048, 00:18:54.081 "data_size": 63488 00:18:54.081 } 00:18:54.081 ] 00:18:54.081 }' 00:18:54.081 13:42:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.081 13:42:33 -- common/autotest_common.sh@10 -- # set +x 00:18:54.646 13:42:33 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:54.646 13:42:33 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:54.646 [2024-07-10 13:42:33.984788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.646 [2024-07-10 13:42:33.984946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.646 [2024-07-10 13:42:33.985000] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:54.646 [2024-07-10 13:42:33.985038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.646 [2024-07-10 13:42:33.985496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.646 [2024-07-10 13:42:33.985573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.646 [2024-07-10 13:42:33.985700] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:54.646 [2024-07-10 13:42:33.985749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:54.646 pt2 00:18:54.646 13:42:33 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:54.904 [2024-07-10 13:42:34.184472] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:54.904 13:42:34 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.904 13:42:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.163 13:42:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.163 "name": "raid_bdev1", 00:18:55.163 "uuid": "5d4a301d-9a42-44a5-a742-2182d71c519e", 00:18:55.163 "strip_size_kb": 64, 00:18:55.163 "state": "configuring", 00:18:55.163 "raid_level": "raid0", 00:18:55.163 "superblock": true, 00:18:55.163 "num_base_bdevs": 4, 00:18:55.163 "num_base_bdevs_discovered": 1, 00:18:55.163 "num_base_bdevs_operational": 4, 00:18:55.163 "base_bdevs_list": [ 00:18:55.163 { 00:18:55.163 "name": "pt1", 00:18:55.163 "uuid": "485bfbfd-ff6c-50e5-b7e9-09e3d29385d0", 00:18:55.163 "is_configured": true, 00:18:55.163 "data_offset": 2048, 00:18:55.163 "data_size": 63488 00:18:55.163 }, 00:18:55.163 { 00:18:55.163 "name": null, 00:18:55.163 "uuid": "cf2a13fe-e5a0-5c30-8ad3-847c2756a7d6", 00:18:55.163 "is_configured": false, 00:18:55.163 "data_offset": 2048, 00:18:55.163 "data_size": 63488 00:18:55.163 }, 00:18:55.163 { 00:18:55.163 "name": null, 00:18:55.163 "uuid": "47422584-57cd-596a-9ff3-0a6c08748470", 00:18:55.163 "is_configured": false, 00:18:55.163 "data_offset": 2048, 00:18:55.163 "data_size": 63488 00:18:55.163 }, 00:18:55.163 { 00:18:55.163 "name": null, 00:18:55.163 "uuid": "3914cad1-353b-5018-a56e-d53d3c8baa9e", 00:18:55.163 "is_configured": false, 00:18:55.163 "data_offset": 2048, 00:18:55.163 "data_size": 63488 00:18:55.163 } 00:18:55.163 ] 00:18:55.163 }' 00:18:55.163 13:42:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.163 13:42:34 -- common/autotest_common.sh@10 -- # set +x 00:18:55.764 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:55.764 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:55.764 13:42:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.024 [2024-07-10 13:42:35.214758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.024 [2024-07-10 13:42:35.214941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.024 [2024-07-10 13:42:35.214991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:56.024 [2024-07-10 13:42:35.215029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.024 [2024-07-10 13:42:35.215519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.024 [2024-07-10 13:42:35.215605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.024 [2024-07-10 13:42:35.215736] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:56.024 [2024-07-10 13:42:35.215780] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.024 pt2 00:18:56.024 13:42:35 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.024 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.024 13:42:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:56.284 [2024-07-10 13:42:35.402436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:56.284 [2024-07-10 13:42:35.402614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.284 [2024-07-10 13:42:35.402676] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:56.284 [2024-07-10 13:42:35.402721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.284 [2024-07-10 13:42:35.403211] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.284 [2024-07-10 13:42:35.403309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:56.284 [2024-07-10 13:42:35.403448] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:56.284 [2024-07-10 13:42:35.403496] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.284 pt3 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.284 [2024-07-10 13:42:35.586116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.284 [2024-07-10 13:42:35.586276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.284 [2024-07-10 13:42:35.586330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:56.284 [2024-07-10 13:42:35.586392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.284 [2024-07-10 13:42:35.586858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.284 [2024-07-10 13:42:35.586938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:56.284 [2024-07-10 13:42:35.587091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:56.284 [2024-07-10 13:42:35.587141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.284 [2024-07-10 13:42:35.587295] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:56.284 [2024-07-10 13:42:35.587329] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:56.284 [2024-07-10 13:42:35.587453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:56.284 [2024-07-10 13:42:35.587779] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:56.284 [2024-07-10 13:42:35.587823] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:56.284 [2024-07-10 13:42:35.587989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.284 pt4 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.284 13:42:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.543 13:42:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.543 "name": "raid_bdev1", 00:18:56.543 "uuid": "5d4a301d-9a42-44a5-a742-2182d71c519e", 00:18:56.543 "strip_size_kb": 64, 00:18:56.543 "state": "online", 00:18:56.543 "raid_level": "raid0", 00:18:56.543 "superblock": true, 00:18:56.543 "num_base_bdevs": 4, 00:18:56.543 "num_base_bdevs_discovered": 4, 00:18:56.543 "num_base_bdevs_operational": 4, 00:18:56.543 "base_bdevs_list": [ 00:18:56.543 { 00:18:56.543 "name": "pt1", 00:18:56.543 "uuid": "485bfbfd-ff6c-50e5-b7e9-09e3d29385d0", 00:18:56.543 "is_configured": true, 00:18:56.543 "data_offset": 2048, 00:18:56.543 "data_size": 63488 00:18:56.543 }, 00:18:56.543 { 00:18:56.543 "name": "pt2", 00:18:56.543 "uuid": "cf2a13fe-e5a0-5c30-8ad3-847c2756a7d6", 00:18:56.543 "is_configured": true, 00:18:56.543 "data_offset": 2048, 00:18:56.543 "data_size": 63488 00:18:56.543 }, 00:18:56.543 { 00:18:56.543 "name": "pt3", 00:18:56.543 "uuid": "47422584-57cd-596a-9ff3-0a6c08748470", 00:18:56.543 "is_configured": true, 00:18:56.543 "data_offset": 2048, 00:18:56.543 "data_size": 63488 00:18:56.543 }, 00:18:56.543 { 00:18:56.543 "name": "pt4", 00:18:56.543 "uuid": "3914cad1-353b-5018-a56e-d53d3c8baa9e", 00:18:56.543 "is_configured": true, 00:18:56.543 "data_offset": 2048, 00:18:56.543 "data_size": 63488 00:18:56.543 } 00:18:56.543 ] 00:18:56.543 }' 00:18:56.543 13:42:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.543 13:42:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.112 13:42:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:57.112 13:42:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:57.371 [2024-07-10 13:42:36.612565] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.371 13:42:36 -- bdev/bdev_raid.sh@430 -- # '[' 5d4a301d-9a42-44a5-a742-2182d71c519e '!=' 5d4a301d-9a42-44a5-a742-2182d71c519e ']' 00:18:57.371 13:42:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:57.371 13:42:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:57.371 13:42:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:57.371 13:42:36 -- bdev/bdev_raid.sh@511 -- # killprocess 122557 00:18:57.371 13:42:36 -- common/autotest_common.sh@926 -- # '[' -z 122557 ']' 00:18:57.371 13:42:36 -- common/autotest_common.sh@930 -- # kill -0 122557 00:18:57.371 13:42:36 -- common/autotest_common.sh@931 -- # uname 00:18:57.371 13:42:36 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.371 13:42:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122557 00:18:57.371 killing process with pid 122557 00:18:57.371 13:42:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:57.371 13:42:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:57.372 13:42:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122557' 00:18:57.372 13:42:36 -- common/autotest_common.sh@945 -- # kill 122557 00:18:57.372 13:42:36 -- common/autotest_common.sh@950 -- # wait 122557 00:18:57.372 [2024-07-10 13:42:36.650111] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.372 [2024-07-10 13:42:36.650191] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.372 [2024-07-10 13:42:36.650258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.372 [2024-07-10 13:42:36.650308] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:57.941 [2024-07-10 13:42:37.046434] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.321 ************************************ 00:18:59.321 END TEST raid_superblock_test 00:18:59.321 ************************************ 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:59.321 00:18:59.321 real 0m10.920s 00:18:59.321 user 0m18.585s 00:18:59.321 sys 0m1.280s 00:18:59.321 13:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.321 13:42:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:59.321 13:42:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:59.321 13:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.321 13:42:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 ************************************ 00:18:59.321 START TEST raid_state_function_test 00:18:59.321 ************************************ 00:18:59.321 13:42:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=122892 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122892' 00:18:59.321 Process raid pid: 122892 00:18:59.321 13:42:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122892 /var/tmp/spdk-raid.sock 00:18:59.321 13:42:38 -- common/autotest_common.sh@819 -- # '[' -z 122892 ']' 00:18:59.321 13:42:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.321 13:42:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.321 13:42:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:59.321 13:42:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.321 13:42:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.321 [2024-07-10 13:42:38.453416] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
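The prologue traced above (bdev_raid.sh@206-228) follows the harness's standard pattern: generate the base bdev names, start a dedicated bdev_svc app, and block until its RPC socket answers. A minimal sketch of that pattern, assuming the SPDK test helpers waitforlisten and killprocess from common/autotest_common.sh are sourced; the backgrounding with '&' and the '$!' capture are inferred from the raid_pid=122892 assignment in the trace rather than shown verbatim:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_server=/var/tmp/spdk-raid.sock
    num_base_bdevs=4
    # same array-generation idiom as the bdev_raid.sh@206 loop traced above
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_server" -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    waitforlisten "$raid_pid" "$rpc_server"   # returns once the app serves RPCs on the socket
    # ... issue rpc.py calls against $rpc_server ...
    killprocess "$raid_pid"                   # teardown, as at the end of raid_superblock_test above

The -L bdev_raid flag enables the *DEBUG* bdev_raid records that appear throughout this log.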
00:18:59.321 [2024-07-10 13:42:38.453635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.321 [2024-07-10 13:42:38.612963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.581 [2024-07-10 13:42:38.814460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.840 [2024-07-10 13:42:39.015142] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.100 13:42:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.100 13:42:39 -- common/autotest_common.sh@852 -- # return 0 00:19:00.100 13:42:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:00.100 [2024-07-10 13:42:39.439658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.100 [2024-07-10 13:42:39.439815] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.100 [2024-07-10 13:42:39.439847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.100 [2024-07-10 13:42:39.439876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.100 [2024-07-10 13:42:39.439891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.100 [2024-07-10 13:42:39.440016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.100 [2024-07-10 13:42:39.440036] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:00.100 [2024-07-10 13:42:39.440095] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.381 13:42:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.382 "name": "Existed_Raid", 00:19:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.382 "strip_size_kb": 64, 00:19:00.382 "state": "configuring", 00:19:00.382 "raid_level": "concat", 00:19:00.382 "superblock": false, 00:19:00.382 "num_base_bdevs": 4, 00:19:00.382 "num_base_bdevs_discovered": 0, 00:19:00.382 "num_base_bdevs_operational": 4, 00:19:00.382 "base_bdevs_list": [ 00:19:00.382 { 00:19:00.382 
"name": "BaseBdev1", 00:19:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.382 "is_configured": false, 00:19:00.382 "data_offset": 0, 00:19:00.382 "data_size": 0 00:19:00.382 }, 00:19:00.382 { 00:19:00.382 "name": "BaseBdev2", 00:19:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.382 "is_configured": false, 00:19:00.382 "data_offset": 0, 00:19:00.382 "data_size": 0 00:19:00.382 }, 00:19:00.382 { 00:19:00.382 "name": "BaseBdev3", 00:19:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.382 "is_configured": false, 00:19:00.382 "data_offset": 0, 00:19:00.382 "data_size": 0 00:19:00.382 }, 00:19:00.382 { 00:19:00.382 "name": "BaseBdev4", 00:19:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.382 "is_configured": false, 00:19:00.382 "data_offset": 0, 00:19:00.382 "data_size": 0 00:19:00.382 } 00:19:00.382 ] 00:19:00.382 }' 00:19:00.382 13:42:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.382 13:42:39 -- common/autotest_common.sh@10 -- # set +x 00:19:01.318 13:42:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.318 [2024-07-10 13:42:40.497744] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.318 [2024-07-10 13:42:40.497843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:01.318 13:42:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.577 [2024-07-10 13:42:40.697469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.577 [2024-07-10 13:42:40.697620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.577 [2024-07-10 13:42:40.697652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.577 [2024-07-10 13:42:40.697695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.577 [2024-07-10 13:42:40.697715] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.577 [2024-07-10 13:42:40.697757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.577 [2024-07-10 13:42:40.697819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.577 [2024-07-10 13:42:40.697852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.577 13:42:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.837 [2024-07-10 13:42:40.935485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.837 BaseBdev1 00:19:01.837 13:42:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:01.837 13:42:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:01.837 13:42:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.837 13:42:40 -- common/autotest_common.sh@889 -- # local i 00:19:01.837 13:42:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.837 13:42:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.837 13:42:40 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.837 13:42:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.097 [ 00:19:02.097 { 00:19:02.097 "name": "BaseBdev1", 00:19:02.097 "aliases": [ 00:19:02.097 "2c9013e2-76a7-4852-af40-995d5de7b668" 00:19:02.097 ], 00:19:02.097 "product_name": "Malloc disk", 00:19:02.097 "block_size": 512, 00:19:02.097 "num_blocks": 65536, 00:19:02.097 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:02.097 "assigned_rate_limits": { 00:19:02.097 "rw_ios_per_sec": 0, 00:19:02.097 "rw_mbytes_per_sec": 0, 00:19:02.097 "r_mbytes_per_sec": 0, 00:19:02.097 "w_mbytes_per_sec": 0 00:19:02.097 }, 00:19:02.097 "claimed": true, 00:19:02.097 "claim_type": "exclusive_write", 00:19:02.097 "zoned": false, 00:19:02.097 "supported_io_types": { 00:19:02.097 "read": true, 00:19:02.097 "write": true, 00:19:02.097 "unmap": true, 00:19:02.097 "write_zeroes": true, 00:19:02.097 "flush": true, 00:19:02.097 "reset": true, 00:19:02.097 "compare": false, 00:19:02.097 "compare_and_write": false, 00:19:02.097 "abort": true, 00:19:02.097 "nvme_admin": false, 00:19:02.097 "nvme_io": false 00:19:02.097 }, 00:19:02.097 "memory_domains": [ 00:19:02.097 { 00:19:02.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.097 "dma_device_type": 2 00:19:02.097 } 00:19:02.097 ], 00:19:02.097 "driver_specific": {} 00:19:02.097 } 00:19:02.097 ] 00:19:02.097 13:42:41 -- common/autotest_common.sh@895 -- # return 0 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.097 13:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.357 13:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.357 "name": "Existed_Raid", 00:19:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.357 "strip_size_kb": 64, 00:19:02.357 "state": "configuring", 00:19:02.357 "raid_level": "concat", 00:19:02.357 "superblock": false, 00:19:02.357 "num_base_bdevs": 4, 00:19:02.357 "num_base_bdevs_discovered": 1, 00:19:02.357 "num_base_bdevs_operational": 4, 00:19:02.357 "base_bdevs_list": [ 00:19:02.357 { 00:19:02.357 "name": "BaseBdev1", 00:19:02.357 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:02.357 "is_configured": true, 00:19:02.357 "data_offset": 0, 00:19:02.357 "data_size": 65536 00:19:02.357 }, 00:19:02.357 { 00:19:02.357 "name": "BaseBdev2", 00:19:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.357 "is_configured": false, 00:19:02.357 "data_offset": 0, 00:19:02.357 "data_size": 0 00:19:02.357 }, 
00:19:02.357 { 00:19:02.357 "name": "BaseBdev3", 00:19:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.357 "is_configured": false, 00:19:02.357 "data_offset": 0, 00:19:02.357 "data_size": 0 00:19:02.357 }, 00:19:02.357 { 00:19:02.357 "name": "BaseBdev4", 00:19:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.357 "is_configured": false, 00:19:02.357 "data_offset": 0, 00:19:02.357 "data_size": 0 00:19:02.357 } 00:19:02.357 ] 00:19:02.357 }' 00:19:02.357 13:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.357 13:42:41 -- common/autotest_common.sh@10 -- # set +x 00:19:02.926 13:42:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.186 [2024-07-10 13:42:42.337182] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.186 [2024-07-10 13:42:42.337330] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:03.186 13:42:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:03.186 13:42:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:03.186 [2024-07-10 13:42:42.532914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.186 [2024-07-10 13:42:42.534651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.186 [2024-07-10 13:42:42.534762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.186 [2024-07-10 13:42:42.534805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.186 [2024-07-10 13:42:42.534834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.186 [2024-07-10 13:42:42.534851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:03.186 [2024-07-10 13:42:42.534871] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.446 "name": "Existed_Raid", 00:19:03.446 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.446 "strip_size_kb": 64, 00:19:03.446 "state": "configuring", 00:19:03.446 "raid_level": "concat", 00:19:03.446 "superblock": false, 00:19:03.446 "num_base_bdevs": 4, 00:19:03.446 "num_base_bdevs_discovered": 1, 00:19:03.446 "num_base_bdevs_operational": 4, 00:19:03.446 "base_bdevs_list": [ 00:19:03.446 { 00:19:03.446 "name": "BaseBdev1", 00:19:03.446 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:03.446 "is_configured": true, 00:19:03.446 "data_offset": 0, 00:19:03.446 "data_size": 65536 00:19:03.446 }, 00:19:03.446 { 00:19:03.446 "name": "BaseBdev2", 00:19:03.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.446 "is_configured": false, 00:19:03.446 "data_offset": 0, 00:19:03.446 "data_size": 0 00:19:03.446 }, 00:19:03.446 { 00:19:03.446 "name": "BaseBdev3", 00:19:03.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.446 "is_configured": false, 00:19:03.446 "data_offset": 0, 00:19:03.446 "data_size": 0 00:19:03.446 }, 00:19:03.446 { 00:19:03.446 "name": "BaseBdev4", 00:19:03.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.446 "is_configured": false, 00:19:03.446 "data_offset": 0, 00:19:03.446 "data_size": 0 00:19:03.446 } 00:19:03.446 ] 00:19:03.446 }' 00:19:03.446 13:42:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.446 13:42:42 -- common/autotest_common.sh@10 -- # set +x 00:19:04.018 13:42:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:04.277 [2024-07-10 13:42:43.576822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.277 BaseBdev2 00:19:04.277 13:42:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:04.277 13:42:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:04.277 13:42:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:04.278 13:42:43 -- common/autotest_common.sh@889 -- # local i 00:19:04.278 13:42:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:04.278 13:42:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:04.278 13:42:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.537 13:42:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:04.796 [ 00:19:04.796 { 00:19:04.796 "name": "BaseBdev2", 00:19:04.796 "aliases": [ 00:19:04.796 "634d7519-1c78-45f8-9976-11022ddbc753" 00:19:04.796 ], 00:19:04.796 "product_name": "Malloc disk", 00:19:04.796 "block_size": 512, 00:19:04.796 "num_blocks": 65536, 00:19:04.796 "uuid": "634d7519-1c78-45f8-9976-11022ddbc753", 00:19:04.796 "assigned_rate_limits": { 00:19:04.796 "rw_ios_per_sec": 0, 00:19:04.796 "rw_mbytes_per_sec": 0, 00:19:04.796 "r_mbytes_per_sec": 0, 00:19:04.796 "w_mbytes_per_sec": 0 00:19:04.796 }, 00:19:04.796 "claimed": true, 00:19:04.796 "claim_type": "exclusive_write", 00:19:04.796 "zoned": false, 00:19:04.796 "supported_io_types": { 00:19:04.796 "read": true, 00:19:04.796 "write": true, 00:19:04.796 "unmap": true, 00:19:04.796 "write_zeroes": true, 00:19:04.796 "flush": true, 00:19:04.796 "reset": true, 00:19:04.796 "compare": false, 00:19:04.796 "compare_and_write": false, 00:19:04.796 "abort": true, 00:19:04.796 "nvme_admin": false, 00:19:04.796 "nvme_io": false 00:19:04.796 }, 00:19:04.796 "memory_domains": [ 
00:19:04.796 { 00:19:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.796 "dma_device_type": 2 00:19:04.796 } 00:19:04.796 ], 00:19:04.796 "driver_specific": {} 00:19:04.796 } 00:19:04.796 ] 00:19:04.796 13:42:43 -- common/autotest_common.sh@895 -- # return 0 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.796 13:42:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.796 13:42:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.796 "name": "Existed_Raid", 00:19:04.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.796 "strip_size_kb": 64, 00:19:04.796 "state": "configuring", 00:19:04.796 "raid_level": "concat", 00:19:04.796 "superblock": false, 00:19:04.796 "num_base_bdevs": 4, 00:19:04.796 "num_base_bdevs_discovered": 2, 00:19:04.796 "num_base_bdevs_operational": 4, 00:19:04.796 "base_bdevs_list": [ 00:19:04.796 { 00:19:04.796 "name": "BaseBdev1", 00:19:04.796 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:04.796 "is_configured": true, 00:19:04.796 "data_offset": 0, 00:19:04.796 "data_size": 65536 00:19:04.796 }, 00:19:04.796 { 00:19:04.796 "name": "BaseBdev2", 00:19:04.796 "uuid": "634d7519-1c78-45f8-9976-11022ddbc753", 00:19:04.796 "is_configured": true, 00:19:04.796 "data_offset": 0, 00:19:04.796 "data_size": 65536 00:19:04.796 }, 00:19:04.796 { 00:19:04.796 "name": "BaseBdev3", 00:19:04.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.796 "is_configured": false, 00:19:04.796 "data_offset": 0, 00:19:04.796 "data_size": 0 00:19:04.796 }, 00:19:04.796 { 00:19:04.796 "name": "BaseBdev4", 00:19:04.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.796 "is_configured": false, 00:19:04.796 "data_offset": 0, 00:19:04.796 "data_size": 0 00:19:04.796 } 00:19:04.796 ] 00:19:04.796 }' 00:19:04.796 13:42:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.796 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:19:05.732 13:42:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:05.732 [2024-07-10 13:42:44.954061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.732 BaseBdev3 00:19:05.732 13:42:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:05.732 13:42:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:05.732 13:42:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:05.732 
13:42:44 -- common/autotest_common.sh@889 -- # local i 00:19:05.732 13:42:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:05.732 13:42:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:05.732 13:42:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.990 13:42:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:05.990 [ 00:19:05.990 { 00:19:05.990 "name": "BaseBdev3", 00:19:05.990 "aliases": [ 00:19:05.990 "3ec73f0a-20e2-4bdf-bebf-f3cf3e61840e" 00:19:05.990 ], 00:19:05.990 "product_name": "Malloc disk", 00:19:05.990 "block_size": 512, 00:19:05.990 "num_blocks": 65536, 00:19:05.990 "uuid": "3ec73f0a-20e2-4bdf-bebf-f3cf3e61840e", 00:19:05.990 "assigned_rate_limits": { 00:19:05.990 "rw_ios_per_sec": 0, 00:19:05.990 "rw_mbytes_per_sec": 0, 00:19:05.990 "r_mbytes_per_sec": 0, 00:19:05.990 "w_mbytes_per_sec": 0 00:19:05.990 }, 00:19:05.990 "claimed": true, 00:19:05.990 "claim_type": "exclusive_write", 00:19:05.990 "zoned": false, 00:19:05.990 "supported_io_types": { 00:19:05.990 "read": true, 00:19:05.990 "write": true, 00:19:05.990 "unmap": true, 00:19:05.990 "write_zeroes": true, 00:19:05.990 "flush": true, 00:19:05.990 "reset": true, 00:19:05.990 "compare": false, 00:19:05.990 "compare_and_write": false, 00:19:05.990 "abort": true, 00:19:05.990 "nvme_admin": false, 00:19:05.990 "nvme_io": false 00:19:05.990 }, 00:19:05.990 "memory_domains": [ 00:19:05.990 { 00:19:05.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.990 "dma_device_type": 2 00:19:05.990 } 00:19:05.990 ], 00:19:05.990 "driver_specific": {} 00:19:05.990 } 00:19:05.990 ] 00:19:05.990 13:42:45 -- common/autotest_common.sh@895 -- # return 0 00:19:05.990 13:42:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:05.990 13:42:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:05.990 13:42:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:05.990 13:42:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.990 13:42:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.991 13:42:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.251 13:42:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.251 "name": "Existed_Raid", 00:19:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.251 "strip_size_kb": 64, 00:19:06.251 "state": "configuring", 00:19:06.251 "raid_level": "concat", 00:19:06.251 "superblock": false, 00:19:06.251 "num_base_bdevs": 4, 00:19:06.251 "num_base_bdevs_discovered": 3, 00:19:06.251 "num_base_bdevs_operational": 4, 00:19:06.251 "base_bdevs_list": [ 00:19:06.251 { 00:19:06.251 "name": 
"BaseBdev1", 00:19:06.251 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:06.251 "is_configured": true, 00:19:06.251 "data_offset": 0, 00:19:06.251 "data_size": 65536 00:19:06.251 }, 00:19:06.251 { 00:19:06.251 "name": "BaseBdev2", 00:19:06.251 "uuid": "634d7519-1c78-45f8-9976-11022ddbc753", 00:19:06.251 "is_configured": true, 00:19:06.251 "data_offset": 0, 00:19:06.251 "data_size": 65536 00:19:06.251 }, 00:19:06.251 { 00:19:06.251 "name": "BaseBdev3", 00:19:06.251 "uuid": "3ec73f0a-20e2-4bdf-bebf-f3cf3e61840e", 00:19:06.251 "is_configured": true, 00:19:06.251 "data_offset": 0, 00:19:06.251 "data_size": 65536 00:19:06.251 }, 00:19:06.251 { 00:19:06.251 "name": "BaseBdev4", 00:19:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.251 "is_configured": false, 00:19:06.251 "data_offset": 0, 00:19:06.251 "data_size": 0 00:19:06.251 } 00:19:06.251 ] 00:19:06.251 }' 00:19:06.251 13:42:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.251 13:42:45 -- common/autotest_common.sh@10 -- # set +x 00:19:06.818 13:42:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:07.078 [2024-07-10 13:42:46.276352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:07.078 [2024-07-10 13:42:46.276445] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:07.078 [2024-07-10 13:42:46.276465] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:07.078 [2024-07-10 13:42:46.276622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:07.078 [2024-07-10 13:42:46.276929] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:07.078 [2024-07-10 13:42:46.276970] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:07.078 [2024-07-10 13:42:46.277217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.078 BaseBdev4 00:19:07.078 13:42:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:07.078 13:42:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:07.078 13:42:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:07.078 13:42:46 -- common/autotest_common.sh@889 -- # local i 00:19:07.078 13:42:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:07.078 13:42:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:07.078 13:42:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.337 13:42:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:07.337 [ 00:19:07.337 { 00:19:07.337 "name": "BaseBdev4", 00:19:07.337 "aliases": [ 00:19:07.337 "f002fdff-bd10-4f7e-bdf5-c72e343e8dca" 00:19:07.337 ], 00:19:07.337 "product_name": "Malloc disk", 00:19:07.337 "block_size": 512, 00:19:07.337 "num_blocks": 65536, 00:19:07.337 "uuid": "f002fdff-bd10-4f7e-bdf5-c72e343e8dca", 00:19:07.337 "assigned_rate_limits": { 00:19:07.338 "rw_ios_per_sec": 0, 00:19:07.338 "rw_mbytes_per_sec": 0, 00:19:07.338 "r_mbytes_per_sec": 0, 00:19:07.338 "w_mbytes_per_sec": 0 00:19:07.338 }, 00:19:07.338 "claimed": true, 00:19:07.338 "claim_type": "exclusive_write", 00:19:07.338 "zoned": false, 00:19:07.338 
"supported_io_types": { 00:19:07.338 "read": true, 00:19:07.338 "write": true, 00:19:07.338 "unmap": true, 00:19:07.338 "write_zeroes": true, 00:19:07.338 "flush": true, 00:19:07.338 "reset": true, 00:19:07.338 "compare": false, 00:19:07.338 "compare_and_write": false, 00:19:07.338 "abort": true, 00:19:07.338 "nvme_admin": false, 00:19:07.338 "nvme_io": false 00:19:07.338 }, 00:19:07.338 "memory_domains": [ 00:19:07.338 { 00:19:07.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.338 "dma_device_type": 2 00:19:07.338 } 00:19:07.338 ], 00:19:07.338 "driver_specific": {} 00:19:07.338 } 00:19:07.338 ] 00:19:07.338 13:42:46 -- common/autotest_common.sh@895 -- # return 0 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.338 13:42:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.597 13:42:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.597 "name": "Existed_Raid", 00:19:07.597 "uuid": "06b49840-16d0-4a1d-904f-496cd58b0f15", 00:19:07.597 "strip_size_kb": 64, 00:19:07.597 "state": "online", 00:19:07.597 "raid_level": "concat", 00:19:07.597 "superblock": false, 00:19:07.597 "num_base_bdevs": 4, 00:19:07.597 "num_base_bdevs_discovered": 4, 00:19:07.597 "num_base_bdevs_operational": 4, 00:19:07.597 "base_bdevs_list": [ 00:19:07.597 { 00:19:07.597 "name": "BaseBdev1", 00:19:07.597 "uuid": "2c9013e2-76a7-4852-af40-995d5de7b668", 00:19:07.597 "is_configured": true, 00:19:07.597 "data_offset": 0, 00:19:07.597 "data_size": 65536 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "name": "BaseBdev2", 00:19:07.597 "uuid": "634d7519-1c78-45f8-9976-11022ddbc753", 00:19:07.597 "is_configured": true, 00:19:07.597 "data_offset": 0, 00:19:07.597 "data_size": 65536 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "name": "BaseBdev3", 00:19:07.597 "uuid": "3ec73f0a-20e2-4bdf-bebf-f3cf3e61840e", 00:19:07.597 "is_configured": true, 00:19:07.597 "data_offset": 0, 00:19:07.597 "data_size": 65536 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "name": "BaseBdev4", 00:19:07.597 "uuid": "f002fdff-bd10-4f7e-bdf5-c72e343e8dca", 00:19:07.597 "is_configured": true, 00:19:07.597 "data_offset": 0, 00:19:07.597 "data_size": 65536 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }' 00:19:07.597 13:42:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.597 13:42:46 -- common/autotest_common.sh@10 -- # set +x 00:19:08.165 13:42:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:08.424 [2024-07-10 13:42:47.562277] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.424 [2024-07-10 13:42:47.562361] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.424 [2024-07-10 13:42:47.562433] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.424 13:42:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.683 13:42:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.683 "name": "Existed_Raid", 00:19:08.683 "uuid": "06b49840-16d0-4a1d-904f-496cd58b0f15", 00:19:08.683 "strip_size_kb": 64, 00:19:08.683 "state": "offline", 00:19:08.683 "raid_level": "concat", 00:19:08.683 "superblock": false, 00:19:08.683 "num_base_bdevs": 4, 00:19:08.683 "num_base_bdevs_discovered": 3, 00:19:08.683 "num_base_bdevs_operational": 3, 00:19:08.683 "base_bdevs_list": [ 00:19:08.683 { 00:19:08.683 "name": null, 00:19:08.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.683 "is_configured": false, 00:19:08.683 "data_offset": 0, 00:19:08.683 "data_size": 65536 00:19:08.683 }, 00:19:08.683 { 00:19:08.683 "name": "BaseBdev2", 00:19:08.683 "uuid": "634d7519-1c78-45f8-9976-11022ddbc753", 00:19:08.683 "is_configured": true, 00:19:08.683 "data_offset": 0, 00:19:08.683 "data_size": 65536 00:19:08.683 }, 00:19:08.683 { 00:19:08.683 "name": "BaseBdev3", 00:19:08.683 "uuid": "3ec73f0a-20e2-4bdf-bebf-f3cf3e61840e", 00:19:08.683 "is_configured": true, 00:19:08.683 "data_offset": 0, 00:19:08.683 "data_size": 65536 00:19:08.683 }, 00:19:08.683 { 00:19:08.683 "name": "BaseBdev4", 00:19:08.683 "uuid": "f002fdff-bd10-4f7e-bdf5-c72e343e8dca", 00:19:08.683 "is_configured": true, 00:19:08.683 "data_offset": 0, 00:19:08.683 "data_size": 65536 00:19:08.683 } 00:19:08.683 ] 00:19:08.683 }' 00:19:08.683 13:42:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.683 13:42:47 -- common/autotest_common.sh@10 -- # set +x 00:19:09.251 13:42:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:09.251 13:42:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:09.251 13:42:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:09.251 13:42:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:09.510 13:42:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:09.510 13:42:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.510 13:42:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:09.510 [2024-07-10 13:42:48.780222] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.769 13:42:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:09.769 13:42:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:09.769 13:42:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.769 13:42:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:09.769 13:42:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:09.769 13:42:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.769 13:42:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:10.028 [2024-07-10 13:42:49.235854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.028 13:42:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:10.028 13:42:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:10.028 13:42:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.028 13:42:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:10.287 13:42:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:10.287 13:42:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.287 13:42:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:10.547 [2024-07-10 13:42:49.709809] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:10.547 [2024-07-10 13:42:49.709920] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:10.547 13:42:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:10.547 13:42:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:10.547 13:42:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.547 13:42:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.807 13:42:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:10.807 13:42:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:10.807 13:42:50 -- bdev/bdev_raid.sh@287 -- # killprocess 122892 00:19:10.807 13:42:50 -- common/autotest_common.sh@926 -- # '[' -z 122892 ']' 00:19:10.807 13:42:50 -- common/autotest_common.sh@930 -- # kill -0 122892 00:19:10.807 13:42:50 -- common/autotest_common.sh@931 -- # uname 00:19:10.807 13:42:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:10.807 13:42:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122892 00:19:10.807 killing process with pid 122892 00:19:10.807 13:42:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:10.807 13:42:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:10.807 13:42:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122892' 00:19:10.807 13:42:50 -- common/autotest_common.sh@945 
-- # kill 122892 00:19:10.807 13:42:50 -- common/autotest_common.sh@950 -- # wait 122892 00:19:10.807 [2024-07-10 13:42:50.040913] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.807 [2024-07-10 13:42:50.041027] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.185 ************************************ 00:19:12.185 END TEST raid_state_function_test 00:19:12.185 ************************************ 00:19:12.185 13:42:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:12.185 00:19:12.185 real 0m12.944s 00:19:12.185 user 0m22.667s 00:19:12.185 sys 0m1.542s 00:19:12.185 13:42:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.185 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:19:12.185 13:42:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:12.185 13:42:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:12.185 13:42:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:12.186 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:19:12.186 ************************************ 00:19:12.186 START TEST raid_state_function_test_sb 00:19:12.186 ************************************ 00:19:12.186 13:42:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=123332 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123332' 00:19:12.186 Process raid pid: 123332 00:19:12.186 13:42:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123332 /var/tmp/spdk-raid.sock 00:19:12.186 13:42:51 -- common/autotest_common.sh@819 -- # '[' -z 123332 ']' 00:19:12.186 13:42:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:12.186 13:42:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:12.186 13:42:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:12.186 13:42:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:12.186 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:19:12.186 [2024-07-10 13:42:51.468124] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:12.186 [2024-07-10 13:42:51.468306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.444 [2024-07-10 13:42:51.623371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.703 [2024-07-10 13:42:51.819850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.703 [2024-07-10 13:42:52.015248] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.962 13:42:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.962 13:42:52 -- common/autotest_common.sh@852 -- # return 0 00:19:12.962 13:42:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:13.221 [2024-07-10 13:42:52.434166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.221 [2024-07-10 13:42:52.434300] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.221 [2024-07-10 13:42:52.434345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:13.221 [2024-07-10 13:42:52.434373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:13.221 [2024-07-10 13:42:52.434387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:13.221 [2024-07-10 13:42:52.434424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:13.221 [2024-07-10 13:42:52.434441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:13.221 [2024-07-10 13:42:52.434466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.221 
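Every verify_raid_bdev_state call traced in this test reduces to the same two steps: pull the raid bdev's JSON out of bdev_raid_get_bdevs and assert on its fields. A condensed sketch of that pattern against the same socket (simplified relative to the full bdev_raid.sh helper):

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  # Assert on the same fields the helper checks: state, level, strip size, bdev counts
  [ "$(jq -r .state <<< "$info")" = configuring ]
  [ "$(jq -r .raid_level <<< "$info")" = concat ]
  [ "$(jq -r .strip_size_kb <<< "$info")" -eq 64 ]
  [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 4 ]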
13:42:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.221 13:42:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.481 13:42:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.481 "name": "Existed_Raid", 00:19:13.481 "uuid": "407669fd-8609-4ee8-b91b-87c823cd7e80", 00:19:13.481 "strip_size_kb": 64, 00:19:13.481 "state": "configuring", 00:19:13.481 "raid_level": "concat", 00:19:13.481 "superblock": true, 00:19:13.481 "num_base_bdevs": 4, 00:19:13.481 "num_base_bdevs_discovered": 0, 00:19:13.481 "num_base_bdevs_operational": 4, 00:19:13.481 "base_bdevs_list": [ 00:19:13.481 { 00:19:13.481 "name": "BaseBdev1", 00:19:13.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.481 "is_configured": false, 00:19:13.481 "data_offset": 0, 00:19:13.481 "data_size": 0 00:19:13.481 }, 00:19:13.481 { 00:19:13.481 "name": "BaseBdev2", 00:19:13.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.481 "is_configured": false, 00:19:13.481 "data_offset": 0, 00:19:13.481 "data_size": 0 00:19:13.481 }, 00:19:13.481 { 00:19:13.481 "name": "BaseBdev3", 00:19:13.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.481 "is_configured": false, 00:19:13.481 "data_offset": 0, 00:19:13.481 "data_size": 0 00:19:13.481 }, 00:19:13.481 { 00:19:13.481 "name": "BaseBdev4", 00:19:13.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.481 "is_configured": false, 00:19:13.481 "data_offset": 0, 00:19:13.481 "data_size": 0 00:19:13.481 } 00:19:13.481 ] 00:19:13.481 }' 00:19:13.481 13:42:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.481 13:42:52 -- common/autotest_common.sh@10 -- # set +x 00:19:14.046 13:42:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.304 [2024-07-10 13:42:53.408378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.304 [2024-07-10 13:42:53.408481] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:14.304 13:42:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.304 [2024-07-10 13:42:53.584166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.304 [2024-07-10 13:42:53.584285] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.304 [2024-07-10 13:42:53.584309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.304 [2024-07-10 13:42:53.584346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.304 [2024-07-10 13:42:53.584363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.304 [2024-07-10 13:42:53.584397] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.304 [2024-07-10 13:42:53.584411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.305 [2024-07-10 13:42:53.584437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.305 13:42:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.563 [2024-07-10 13:42:53.799054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.563 BaseBdev1 00:19:14.563 13:42:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:14.563 13:42:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:14.563 13:42:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.563 13:42:53 -- common/autotest_common.sh@889 -- # local i 00:19:14.563 13:42:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.563 13:42:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.563 13:42:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.821 13:42:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.821 [ 00:19:14.821 { 00:19:14.821 "name": "BaseBdev1", 00:19:14.821 "aliases": [ 00:19:14.821 "d039cb24-4dea-40dc-af17-a87df2ea16f0" 00:19:14.821 ], 00:19:14.821 "product_name": "Malloc disk", 00:19:14.821 "block_size": 512, 00:19:14.821 "num_blocks": 65536, 00:19:14.821 "uuid": "d039cb24-4dea-40dc-af17-a87df2ea16f0", 00:19:14.821 "assigned_rate_limits": { 00:19:14.821 "rw_ios_per_sec": 0, 00:19:14.821 "rw_mbytes_per_sec": 0, 00:19:14.821 "r_mbytes_per_sec": 0, 00:19:14.821 "w_mbytes_per_sec": 0 00:19:14.821 }, 00:19:14.821 "claimed": true, 00:19:14.821 "claim_type": "exclusive_write", 00:19:14.821 "zoned": false, 00:19:14.821 "supported_io_types": { 00:19:14.821 "read": true, 00:19:14.821 "write": true, 00:19:14.821 "unmap": true, 00:19:14.821 "write_zeroes": true, 00:19:14.821 "flush": true, 00:19:14.821 "reset": true, 00:19:14.821 "compare": false, 00:19:14.821 "compare_and_write": false, 00:19:14.821 "abort": true, 00:19:14.821 "nvme_admin": false, 00:19:14.821 "nvme_io": false 00:19:14.821 }, 00:19:14.821 "memory_domains": [ 00:19:14.821 { 00:19:14.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.821 "dma_device_type": 2 00:19:14.821 } 00:19:14.821 ], 00:19:14.821 "driver_specific": {} 00:19:14.821 } 00:19:14.821 ] 00:19:15.080 13:42:54 -- common/autotest_common.sh@895 -- # return 0 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.080 13:42:54 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.080 "name": "Existed_Raid", 00:19:15.080 "uuid": "eb15b9fd-832f-4c21-89de-02e0e74ea2f6", 00:19:15.080 "strip_size_kb": 64, 00:19:15.080 "state": "configuring", 00:19:15.080 "raid_level": "concat", 00:19:15.080 "superblock": true, 00:19:15.080 "num_base_bdevs": 4, 00:19:15.080 "num_base_bdevs_discovered": 1, 00:19:15.080 "num_base_bdevs_operational": 4, 00:19:15.080 "base_bdevs_list": [ 00:19:15.080 { 00:19:15.080 "name": "BaseBdev1", 00:19:15.080 "uuid": "d039cb24-4dea-40dc-af17-a87df2ea16f0", 00:19:15.080 "is_configured": true, 00:19:15.080 "data_offset": 2048, 00:19:15.080 "data_size": 63488 00:19:15.080 }, 00:19:15.080 { 00:19:15.080 "name": "BaseBdev2", 00:19:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.080 "is_configured": false, 00:19:15.080 "data_offset": 0, 00:19:15.080 "data_size": 0 00:19:15.080 }, 00:19:15.080 { 00:19:15.080 "name": "BaseBdev3", 00:19:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.080 "is_configured": false, 00:19:15.080 "data_offset": 0, 00:19:15.080 "data_size": 0 00:19:15.080 }, 00:19:15.080 { 00:19:15.080 "name": "BaseBdev4", 00:19:15.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.080 "is_configured": false, 00:19:15.080 "data_offset": 0, 00:19:15.080 "data_size": 0 00:19:15.080 } 00:19:15.080 ] 00:19:15.080 }' 00:19:15.080 13:42:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.080 13:42:54 -- common/autotest_common.sh@10 -- # set +x 00:19:15.648 13:42:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:15.907 [2024-07-10 13:42:55.092862] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.907 [2024-07-10 13:42:55.092993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:15.907 13:42:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:15.907 13:42:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:16.166 13:42:55 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.425 BaseBdev1 00:19:16.426 13:42:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:16.426 13:42:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:16.426 13:42:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:16.426 13:42:55 -- common/autotest_common.sh@889 -- # local i 00:19:16.426 13:42:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:16.426 13:42:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:16.426 13:42:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.426 13:42:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.684 [ 00:19:16.684 { 00:19:16.684 "name": "BaseBdev1", 00:19:16.684 "aliases": [ 00:19:16.684 
"05238a8d-1d17-4a1d-bb63-ce4df5749997" 00:19:16.684 ], 00:19:16.684 "product_name": "Malloc disk", 00:19:16.684 "block_size": 512, 00:19:16.684 "num_blocks": 65536, 00:19:16.684 "uuid": "05238a8d-1d17-4a1d-bb63-ce4df5749997", 00:19:16.684 "assigned_rate_limits": { 00:19:16.684 "rw_ios_per_sec": 0, 00:19:16.684 "rw_mbytes_per_sec": 0, 00:19:16.684 "r_mbytes_per_sec": 0, 00:19:16.684 "w_mbytes_per_sec": 0 00:19:16.684 }, 00:19:16.684 "claimed": false, 00:19:16.684 "zoned": false, 00:19:16.684 "supported_io_types": { 00:19:16.684 "read": true, 00:19:16.684 "write": true, 00:19:16.684 "unmap": true, 00:19:16.684 "write_zeroes": true, 00:19:16.684 "flush": true, 00:19:16.684 "reset": true, 00:19:16.684 "compare": false, 00:19:16.684 "compare_and_write": false, 00:19:16.684 "abort": true, 00:19:16.684 "nvme_admin": false, 00:19:16.684 "nvme_io": false 00:19:16.684 }, 00:19:16.684 "memory_domains": [ 00:19:16.684 { 00:19:16.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.684 "dma_device_type": 2 00:19:16.684 } 00:19:16.684 ], 00:19:16.684 "driver_specific": {} 00:19:16.684 } 00:19:16.684 ] 00:19:16.684 13:42:55 -- common/autotest_common.sh@895 -- # return 0 00:19:16.684 13:42:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:16.943 [2024-07-10 13:42:56.106558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.943 [2024-07-10 13:42:56.108037] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.943 [2024-07-10 13:42:56.108142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.943 [2024-07-10 13:42:56.108171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:16.943 [2024-07-10 13:42:56.108200] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:16.943 [2024-07-10 13:42:56.108217] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:16.943 [2024-07-10 13:42:56.108236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.943 13:42:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.201 13:42:56 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:17.201 "name": "Existed_Raid", 00:19:17.201 "uuid": "0d9d9462-5b51-49f3-b005-65e63296143b", 00:19:17.201 "strip_size_kb": 64, 00:19:17.201 "state": "configuring", 00:19:17.201 "raid_level": "concat", 00:19:17.201 "superblock": true, 00:19:17.201 "num_base_bdevs": 4, 00:19:17.201 "num_base_bdevs_discovered": 1, 00:19:17.201 "num_base_bdevs_operational": 4, 00:19:17.201 "base_bdevs_list": [ 00:19:17.201 { 00:19:17.201 "name": "BaseBdev1", 00:19:17.201 "uuid": "05238a8d-1d17-4a1d-bb63-ce4df5749997", 00:19:17.201 "is_configured": true, 00:19:17.201 "data_offset": 2048, 00:19:17.201 "data_size": 63488 00:19:17.201 }, 00:19:17.201 { 00:19:17.201 "name": "BaseBdev2", 00:19:17.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.201 "is_configured": false, 00:19:17.201 "data_offset": 0, 00:19:17.201 "data_size": 0 00:19:17.201 }, 00:19:17.201 { 00:19:17.201 "name": "BaseBdev3", 00:19:17.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.201 "is_configured": false, 00:19:17.201 "data_offset": 0, 00:19:17.201 "data_size": 0 00:19:17.201 }, 00:19:17.201 { 00:19:17.201 "name": "BaseBdev4", 00:19:17.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.201 "is_configured": false, 00:19:17.201 "data_offset": 0, 00:19:17.201 "data_size": 0 00:19:17.201 } 00:19:17.201 ] 00:19:17.201 }' 00:19:17.201 13:42:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.201 13:42:56 -- common/autotest_common.sh@10 -- # set +x 00:19:17.771 13:42:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:18.028 [2024-07-10 13:42:57.131142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.028 BaseBdev2 00:19:18.028 13:42:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:18.028 13:42:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:18.028 13:42:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.028 13:42:57 -- common/autotest_common.sh@889 -- # local i 00:19:18.028 13:42:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.028 13:42:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.028 13:42:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.029 13:42:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:18.286 [ 00:19:18.286 { 00:19:18.286 "name": "BaseBdev2", 00:19:18.286 "aliases": [ 00:19:18.286 "db77a6aa-d613-46db-8d64-406039b4b6c6" 00:19:18.286 ], 00:19:18.286 "product_name": "Malloc disk", 00:19:18.286 "block_size": 512, 00:19:18.286 "num_blocks": 65536, 00:19:18.286 "uuid": "db77a6aa-d613-46db-8d64-406039b4b6c6", 00:19:18.286 "assigned_rate_limits": { 00:19:18.286 "rw_ios_per_sec": 0, 00:19:18.286 "rw_mbytes_per_sec": 0, 00:19:18.286 "r_mbytes_per_sec": 0, 00:19:18.286 "w_mbytes_per_sec": 0 00:19:18.286 }, 00:19:18.286 "claimed": true, 00:19:18.286 "claim_type": "exclusive_write", 00:19:18.286 "zoned": false, 00:19:18.286 "supported_io_types": { 00:19:18.286 "read": true, 00:19:18.286 "write": true, 00:19:18.286 "unmap": true, 00:19:18.286 "write_zeroes": true, 00:19:18.286 "flush": true, 00:19:18.286 "reset": true, 00:19:18.286 "compare": false, 00:19:18.286 "compare_and_write": false, 00:19:18.286 "abort": true, 00:19:18.286 "nvme_admin": false, 00:19:18.286 
"nvme_io": false 00:19:18.286 }, 00:19:18.286 "memory_domains": [ 00:19:18.286 { 00:19:18.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.286 "dma_device_type": 2 00:19:18.286 } 00:19:18.286 ], 00:19:18.286 "driver_specific": {} 00:19:18.286 } 00:19:18.286 ] 00:19:18.286 13:42:57 -- common/autotest_common.sh@895 -- # return 0 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.286 13:42:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.544 13:42:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.544 "name": "Existed_Raid", 00:19:18.544 "uuid": "0d9d9462-5b51-49f3-b005-65e63296143b", 00:19:18.544 "strip_size_kb": 64, 00:19:18.544 "state": "configuring", 00:19:18.544 "raid_level": "concat", 00:19:18.544 "superblock": true, 00:19:18.544 "num_base_bdevs": 4, 00:19:18.544 "num_base_bdevs_discovered": 2, 00:19:18.544 "num_base_bdevs_operational": 4, 00:19:18.544 "base_bdevs_list": [ 00:19:18.544 { 00:19:18.544 "name": "BaseBdev1", 00:19:18.544 "uuid": "05238a8d-1d17-4a1d-bb63-ce4df5749997", 00:19:18.544 "is_configured": true, 00:19:18.544 "data_offset": 2048, 00:19:18.544 "data_size": 63488 00:19:18.544 }, 00:19:18.544 { 00:19:18.544 "name": "BaseBdev2", 00:19:18.544 "uuid": "db77a6aa-d613-46db-8d64-406039b4b6c6", 00:19:18.544 "is_configured": true, 00:19:18.544 "data_offset": 2048, 00:19:18.544 "data_size": 63488 00:19:18.544 }, 00:19:18.544 { 00:19:18.544 "name": "BaseBdev3", 00:19:18.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.544 "is_configured": false, 00:19:18.544 "data_offset": 0, 00:19:18.544 "data_size": 0 00:19:18.544 }, 00:19:18.544 { 00:19:18.544 "name": "BaseBdev4", 00:19:18.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.544 "is_configured": false, 00:19:18.544 "data_offset": 0, 00:19:18.544 "data_size": 0 00:19:18.544 } 00:19:18.544 ] 00:19:18.544 }' 00:19:18.544 13:42:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.544 13:42:57 -- common/autotest_common.sh@10 -- # set +x 00:19:18.865 13:42:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:19.123 [2024-07-10 13:42:58.415051] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.123 BaseBdev3 00:19:19.123 13:42:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:19.123 13:42:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:19.123 13:42:58 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:19.123 13:42:58 -- common/autotest_common.sh@889 -- # local i 00:19:19.123 13:42:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:19.123 13:42:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:19.123 13:42:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.381 13:42:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:19.381 [ 00:19:19.381 { 00:19:19.381 "name": "BaseBdev3", 00:19:19.381 "aliases": [ 00:19:19.381 "dbe7dbbd-b75a-4e15-b959-f12116148242" 00:19:19.381 ], 00:19:19.381 "product_name": "Malloc disk", 00:19:19.381 "block_size": 512, 00:19:19.381 "num_blocks": 65536, 00:19:19.381 "uuid": "dbe7dbbd-b75a-4e15-b959-f12116148242", 00:19:19.381 "assigned_rate_limits": { 00:19:19.381 "rw_ios_per_sec": 0, 00:19:19.381 "rw_mbytes_per_sec": 0, 00:19:19.381 "r_mbytes_per_sec": 0, 00:19:19.381 "w_mbytes_per_sec": 0 00:19:19.381 }, 00:19:19.381 "claimed": true, 00:19:19.381 "claim_type": "exclusive_write", 00:19:19.381 "zoned": false, 00:19:19.381 "supported_io_types": { 00:19:19.381 "read": true, 00:19:19.381 "write": true, 00:19:19.381 "unmap": true, 00:19:19.381 "write_zeroes": true, 00:19:19.381 "flush": true, 00:19:19.381 "reset": true, 00:19:19.381 "compare": false, 00:19:19.381 "compare_and_write": false, 00:19:19.381 "abort": true, 00:19:19.381 "nvme_admin": false, 00:19:19.381 "nvme_io": false 00:19:19.381 }, 00:19:19.381 "memory_domains": [ 00:19:19.381 { 00:19:19.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.381 "dma_device_type": 2 00:19:19.381 } 00:19:19.381 ], 00:19:19.381 "driver_specific": {} 00:19:19.381 } 00:19:19.381 ] 00:19:19.381 13:42:58 -- common/autotest_common.sh@895 -- # return 0 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.381 13:42:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.638 13:42:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.638 "name": "Existed_Raid", 00:19:19.638 "uuid": "0d9d9462-5b51-49f3-b005-65e63296143b", 00:19:19.638 "strip_size_kb": 64, 00:19:19.638 "state": "configuring", 00:19:19.638 "raid_level": "concat", 00:19:19.638 "superblock": true, 00:19:19.638 "num_base_bdevs": 4, 00:19:19.638 "num_base_bdevs_discovered": 3, 00:19:19.638 "num_base_bdevs_operational": 4, 
00:19:19.638 "base_bdevs_list": [ 00:19:19.638 { 00:19:19.638 "name": "BaseBdev1", 00:19:19.638 "uuid": "05238a8d-1d17-4a1d-bb63-ce4df5749997", 00:19:19.638 "is_configured": true, 00:19:19.638 "data_offset": 2048, 00:19:19.638 "data_size": 63488 00:19:19.638 }, 00:19:19.638 { 00:19:19.638 "name": "BaseBdev2", 00:19:19.638 "uuid": "db77a6aa-d613-46db-8d64-406039b4b6c6", 00:19:19.638 "is_configured": true, 00:19:19.638 "data_offset": 2048, 00:19:19.638 "data_size": 63488 00:19:19.638 }, 00:19:19.638 { 00:19:19.638 "name": "BaseBdev3", 00:19:19.638 "uuid": "dbe7dbbd-b75a-4e15-b959-f12116148242", 00:19:19.638 "is_configured": true, 00:19:19.638 "data_offset": 2048, 00:19:19.638 "data_size": 63488 00:19:19.638 }, 00:19:19.638 { 00:19:19.638 "name": "BaseBdev4", 00:19:19.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.638 "is_configured": false, 00:19:19.638 "data_offset": 0, 00:19:19.638 "data_size": 0 00:19:19.638 } 00:19:19.638 ] 00:19:19.638 }' 00:19:19.638 13:42:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.638 13:42:58 -- common/autotest_common.sh@10 -- # set +x 00:19:20.203 13:42:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:20.461 [2024-07-10 13:42:59.682121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.461 [2024-07-10 13:42:59.682389] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:20.461 [2024-07-10 13:42:59.682436] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:20.461 [2024-07-10 13:42:59.682588] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:20.461 BaseBdev4 00:19:20.461 [2024-07-10 13:42:59.682915] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:20.461 [2024-07-10 13:42:59.682959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:20.461 [2024-07-10 13:42:59.683107] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.461 13:42:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:20.461 13:42:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:20.461 13:42:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:20.461 13:42:59 -- common/autotest_common.sh@889 -- # local i 00:19:20.461 13:42:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:20.461 13:42:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:20.461 13:42:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.719 13:42:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.719 [ 00:19:20.719 { 00:19:20.719 "name": "BaseBdev4", 00:19:20.719 "aliases": [ 00:19:20.719 "46208b26-958d-41a4-9e7c-5d077e8127d3" 00:19:20.719 ], 00:19:20.719 "product_name": "Malloc disk", 00:19:20.719 "block_size": 512, 00:19:20.719 "num_blocks": 65536, 00:19:20.719 "uuid": "46208b26-958d-41a4-9e7c-5d077e8127d3", 00:19:20.719 "assigned_rate_limits": { 00:19:20.719 "rw_ios_per_sec": 0, 00:19:20.719 "rw_mbytes_per_sec": 0, 00:19:20.719 "r_mbytes_per_sec": 0, 00:19:20.719 "w_mbytes_per_sec": 0 00:19:20.719 }, 00:19:20.719 "claimed": true, 00:19:20.719 "claim_type": 
"exclusive_write", 00:19:20.719 "zoned": false, 00:19:20.719 "supported_io_types": { 00:19:20.719 "read": true, 00:19:20.719 "write": true, 00:19:20.719 "unmap": true, 00:19:20.719 "write_zeroes": true, 00:19:20.719 "flush": true, 00:19:20.719 "reset": true, 00:19:20.719 "compare": false, 00:19:20.719 "compare_and_write": false, 00:19:20.719 "abort": true, 00:19:20.719 "nvme_admin": false, 00:19:20.719 "nvme_io": false 00:19:20.719 }, 00:19:20.719 "memory_domains": [ 00:19:20.719 { 00:19:20.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.719 "dma_device_type": 2 00:19:20.719 } 00:19:20.719 ], 00:19:20.719 "driver_specific": {} 00:19:20.719 } 00:19:20.719 ] 00:19:20.719 13:43:00 -- common/autotest_common.sh@895 -- # return 0 00:19:20.719 13:43:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.720 13:43:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.980 13:43:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.980 "name": "Existed_Raid", 00:19:20.980 "uuid": "0d9d9462-5b51-49f3-b005-65e63296143b", 00:19:20.980 "strip_size_kb": 64, 00:19:20.980 "state": "online", 00:19:20.980 "raid_level": "concat", 00:19:20.980 "superblock": true, 00:19:20.980 "num_base_bdevs": 4, 00:19:20.980 "num_base_bdevs_discovered": 4, 00:19:20.980 "num_base_bdevs_operational": 4, 00:19:20.980 "base_bdevs_list": [ 00:19:20.980 { 00:19:20.980 "name": "BaseBdev1", 00:19:20.980 "uuid": "05238a8d-1d17-4a1d-bb63-ce4df5749997", 00:19:20.980 "is_configured": true, 00:19:20.980 "data_offset": 2048, 00:19:20.980 "data_size": 63488 00:19:20.980 }, 00:19:20.980 { 00:19:20.980 "name": "BaseBdev2", 00:19:20.980 "uuid": "db77a6aa-d613-46db-8d64-406039b4b6c6", 00:19:20.980 "is_configured": true, 00:19:20.980 "data_offset": 2048, 00:19:20.980 "data_size": 63488 00:19:20.980 }, 00:19:20.980 { 00:19:20.980 "name": "BaseBdev3", 00:19:20.980 "uuid": "dbe7dbbd-b75a-4e15-b959-f12116148242", 00:19:20.980 "is_configured": true, 00:19:20.980 "data_offset": 2048, 00:19:20.980 "data_size": 63488 00:19:20.980 }, 00:19:20.980 { 00:19:20.980 "name": "BaseBdev4", 00:19:20.980 "uuid": "46208b26-958d-41a4-9e7c-5d077e8127d3", 00:19:20.980 "is_configured": true, 00:19:20.980 "data_offset": 2048, 00:19:20.980 "data_size": 63488 00:19:20.980 } 00:19:20.980 ] 00:19:20.980 }' 00:19:20.980 13:43:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.980 13:43:00 -- common/autotest_common.sh@10 -- # set +x 00:19:21.547 13:43:00 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:21.806 [2024-07-10 13:43:00.983991] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.806 [2024-07-10 13:43:00.984118] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.806 [2024-07-10 13:43:00.984194] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.806 13:43:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.064 13:43:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.064 "name": "Existed_Raid", 00:19:22.064 "uuid": "0d9d9462-5b51-49f3-b005-65e63296143b", 00:19:22.064 "strip_size_kb": 64, 00:19:22.064 "state": "offline", 00:19:22.064 "raid_level": "concat", 00:19:22.064 "superblock": true, 00:19:22.064 "num_base_bdevs": 4, 00:19:22.064 "num_base_bdevs_discovered": 3, 00:19:22.064 "num_base_bdevs_operational": 3, 00:19:22.064 "base_bdevs_list": [ 00:19:22.064 { 00:19:22.064 "name": null, 00:19:22.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.064 "is_configured": false, 00:19:22.064 "data_offset": 2048, 00:19:22.064 "data_size": 63488 00:19:22.064 }, 00:19:22.064 { 00:19:22.064 "name": "BaseBdev2", 00:19:22.064 "uuid": "db77a6aa-d613-46db-8d64-406039b4b6c6", 00:19:22.064 "is_configured": true, 00:19:22.064 "data_offset": 2048, 00:19:22.064 "data_size": 63488 00:19:22.064 }, 00:19:22.064 { 00:19:22.064 "name": "BaseBdev3", 00:19:22.064 "uuid": "dbe7dbbd-b75a-4e15-b959-f12116148242", 00:19:22.064 "is_configured": true, 00:19:22.064 "data_offset": 2048, 00:19:22.064 "data_size": 63488 00:19:22.064 }, 00:19:22.064 { 00:19:22.064 "name": "BaseBdev4", 00:19:22.064 "uuid": "46208b26-958d-41a4-9e7c-5d077e8127d3", 00:19:22.064 "is_configured": true, 00:19:22.064 "data_offset": 2048, 00:19:22.064 "data_size": 63488 00:19:22.064 } 00:19:22.064 ] 00:19:22.064 }' 00:19:22.064 13:43:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.064 13:43:01 -- common/autotest_common.sh@10 -- # set +x 00:19:22.630 13:43:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:22.630 13:43:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.630 13:43:01 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.630 13:43:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.888 13:43:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.888 13:43:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.888 13:43:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:22.888 [2024-07-10 13:43:02.186546] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.146 13:43:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:23.405 [2024-07-10 13:43:02.649091] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.405 13:43:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.405 13:43:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.405 13:43:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.405 13:43:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.663 13:43:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.663 13:43:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.663 13:43:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:23.921 [2024-07-10 13:43:03.093957] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:23.921 [2024-07-10 13:43:03.094068] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:23.921 13:43:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.921 13:43:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.921 13:43:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:23.921 13:43:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.179 13:43:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:24.179 13:43:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:24.179 13:43:03 -- bdev/bdev_raid.sh@287 -- # killprocess 123332 00:19:24.179 13:43:03 -- common/autotest_common.sh@926 -- # '[' -z 123332 ']' 00:19:24.179 13:43:03 -- common/autotest_common.sh@930 -- # kill -0 123332 00:19:24.179 13:43:03 -- common/autotest_common.sh@931 -- # uname 00:19:24.179 13:43:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:24.179 13:43:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123332 00:19:24.179 13:43:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:24.179 13:43:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:24.179 13:43:03 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 123332' 00:19:24.179 killing process with pid 123332 00:19:24.179 13:43:03 -- common/autotest_common.sh@945 -- # kill 123332 00:19:24.179 13:43:03 -- common/autotest_common.sh@950 -- # wait 123332 00:19:24.179 [2024-07-10 13:43:03.408886] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.179 [2024-07-10 13:43:03.409014] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.553 ************************************ 00:19:25.553 END TEST raid_state_function_test_sb 00:19:25.553 ************************************ 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:25.553 00:19:25.553 real 0m13.267s 00:19:25.553 user 0m23.255s 00:19:25.553 sys 0m1.469s 00:19:25.553 13:43:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.553 13:43:04 -- common/autotest_common.sh@10 -- # set +x 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:25.553 13:43:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:25.553 13:43:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.553 13:43:04 -- common/autotest_common.sh@10 -- # set +x 00:19:25.553 ************************************ 00:19:25.553 START TEST raid_superblock_test 00:19:25.553 ************************************ 00:19:25.553 13:43:04 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@357 -- # raid_pid=123781 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:25.553 13:43:04 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123781 /var/tmp/spdk-raid.sock 00:19:25.553 13:43:04 -- common/autotest_common.sh@819 -- # '[' -z 123781 ']' 00:19:25.553 13:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.553 13:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.553 13:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
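Before the RPC traffic below starts, the harness has to bring up its JSON-RPC target. Condensed from the trace above — with the polling loop written out as an assumed stand-in for the autotest waitforlisten helper, not that helper's exact source:

  # Spawn the bdev service that raid_superblock_test drives over JSON-RPC,
  # with bdev_raid debug logging enabled (-L bdev_raid).
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!

  # Assumed stand-in for waitforlisten: poll until the UNIX socket answers RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done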
00:19:25.553 13:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.553 13:43:04 -- common/autotest_common.sh@10 -- # set +x 00:19:25.553 [2024-07-10 13:43:04.790946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:25.553 [2024-07-10 13:43:04.791119] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123781 ] 00:19:25.810 [2024-07-10 13:43:04.945036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.810 [2024-07-10 13:43:05.131007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.067 [2024-07-10 13:43:05.333365] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.325 13:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.325 13:43:05 -- common/autotest_common.sh@852 -- # return 0 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.325 13:43:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:26.584 malloc1 00:19:26.584 13:43:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.848 [2024-07-10 13:43:05.999836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.848 [2024-07-10 13:43:06.000021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.848 [2024-07-10 13:43:06.000070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:26.848 [2024-07-10 13:43:06.000168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.848 [2024-07-10 13:43:06.002239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.848 [2024-07-10 13:43:06.002332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.848 pt1 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.848 13:43:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:27.106 malloc2 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.107 [2024-07-10 13:43:06.435580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.107 [2024-07-10 13:43:06.435745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.107 [2024-07-10 13:43:06.435801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:27.107 [2024-07-10 13:43:06.435884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.107 [2024-07-10 13:43:06.437882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.107 [2024-07-10 13:43:06.437972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:27.107 pt2 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.107 13:43:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:27.364 malloc3 00:19:27.364 13:43:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:27.622 [2024-07-10 13:43:06.837742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:27.622 [2024-07-10 13:43:06.837916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.622 [2024-07-10 13:43:06.837990] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:27.622 [2024-07-10 13:43:06.838068] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.622 [2024-07-10 13:43:06.840092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.622 [2024-07-10 13:43:06.840215] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:27.622 pt3 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.622 13:43:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:27.881 malloc4 00:19:27.881 13:43:07 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:28.140 [2024-07-10 13:43:07.270746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:28.140 [2024-07-10 13:43:07.270915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.140 [2024-07-10 13:43:07.270973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:28.140 [2024-07-10 13:43:07.271049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.140 [2024-07-10 13:43:07.273098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.140 [2024-07-10 13:43:07.273191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:28.140 pt4 00:19:28.140 13:43:07 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:28.140 13:43:07 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:28.140 13:43:07 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:28.399 [2024-07-10 13:43:07.502417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.399 [2024-07-10 13:43:07.504233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.399 [2024-07-10 13:43:07.504337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.399 [2024-07-10 13:43:07.504419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:28.399 [2024-07-10 13:43:07.504668] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:28.399 [2024-07-10 13:43:07.504709] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:28.399 [2024-07-10 13:43:07.504878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:28.399 [2024-07-10 13:43:07.505263] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:28.399 [2024-07-10 13:43:07.505314] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:28.399 [2024-07-10 13:43:07.505498] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
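Stripped of the xtrace noise, the construction phase traced above is a short RPC sequence: four 32 MB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev with a fixed UUID, then assembled into a concat array with a 64 KiB strip and on-disk superblocks (-s). A condensed sketch of the calls visible in the log, not the test script verbatim:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3 4; do
    # 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev.
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done

  # Concat array, 64 KiB strip, superblock written to each base bdev (-s).
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # The array should come online with all four members discovered.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'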
00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.399 "name": "raid_bdev1", 00:19:28.399 "uuid": "9017cc4d-67f9-4154-8cc3-7fb1fc52f332", 00:19:28.399 "strip_size_kb": 64, 00:19:28.399 "state": "online", 00:19:28.399 "raid_level": "concat", 00:19:28.399 "superblock": true, 00:19:28.399 "num_base_bdevs": 4, 00:19:28.399 "num_base_bdevs_discovered": 4, 00:19:28.399 "num_base_bdevs_operational": 4, 00:19:28.399 "base_bdevs_list": [ 00:19:28.399 { 00:19:28.399 "name": "pt1", 00:19:28.399 "uuid": "03aedc7f-5eed-5352-8ded-2abf90208cde", 00:19:28.399 "is_configured": true, 00:19:28.399 "data_offset": 2048, 00:19:28.399 "data_size": 63488 00:19:28.399 }, 00:19:28.399 { 00:19:28.399 "name": "pt2", 00:19:28.399 "uuid": "11ced6b4-382f-5b10-9d11-60e5f1831da7", 00:19:28.399 "is_configured": true, 00:19:28.399 "data_offset": 2048, 00:19:28.399 "data_size": 63488 00:19:28.399 }, 00:19:28.399 { 00:19:28.399 "name": "pt3", 00:19:28.399 "uuid": "e629f1e8-89a1-5f9e-be2c-60c7a7486c12", 00:19:28.399 "is_configured": true, 00:19:28.399 "data_offset": 2048, 00:19:28.399 "data_size": 63488 00:19:28.399 }, 00:19:28.399 { 00:19:28.399 "name": "pt4", 00:19:28.399 "uuid": "09313726-e2d2-580a-84f5-25788a8eebad", 00:19:28.399 "is_configured": true, 00:19:28.399 "data_offset": 2048, 00:19:28.399 "data_size": 63488 00:19:28.399 } 00:19:28.399 ] 00:19:28.399 }' 00:19:28.399 13:43:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.399 13:43:07 -- common/autotest_common.sh@10 -- # set +x 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:29.334 [2024-07-10 13:43:08.484883] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9017cc4d-67f9-4154-8cc3-7fb1fc52f332 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@380 -- # '[' -z 9017cc4d-67f9-4154-8cc3-7fb1fc52f332 ']' 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:29.334 [2024-07-10 13:43:08.660350] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.334 [2024-07-10 13:43:08.660468] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.334 [2024-07-10 13:43:08.660597] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.334 [2024-07-10 13:43:08.660720] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.334 [2024-07-10 13:43:08.660757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.334 13:43:08 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:29.592 13:43:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:29.592 13:43:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:29.592 13:43:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.592 13:43:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
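The teardown traced from this point on runs the same shape in reverse. A condensed sketch of the calls in the log; after the raid and passthru deletions, both verification queries are expected to come back empty and false respectively:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $RPC bdev_raid_delete raid_bdev1
  # No raid bdevs should remain after the delete.
  $RPC bdev_raid_get_bdevs all | jq -r '.[]'

  for i in 1 2 3 4; do
    $RPC bdev_passthru_delete pt$i
  done
  # Likewise, no passthru bdevs should survive ("false" expected here).
  $RPC bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'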
00:19:29.850 13:43:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.850 13:43:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:30.108 13:43:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.108 13:43:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:30.108 13:43:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.108 13:43:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:30.367 13:43:09 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:30.367 13:43:09 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:30.626 13:43:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:30.626 13:43:09 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:30.626 13:43:09 -- common/autotest_common.sh@640 -- # local es=0 00:19:30.626 13:43:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:30.626 13:43:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.626 13:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.626 13:43:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.626 13:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.626 13:43:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.626 13:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:30.626 13:43:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.626 13:43:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:30.626 13:43:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:30.626 [2024-07-10 13:43:09.946056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:30.626 [2024-07-10 13:43:09.947903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:30.626 [2024-07-10 13:43:09.948016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:30.626 [2024-07-10 13:43:09.948078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:30.626 [2024-07-10 13:43:09.948201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:30.626 [2024-07-10 13:43:09.948303] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:30.626 [2024-07-10 13:43:09.948360] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:30.626 
[2024-07-10 13:43:09.948437] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:30.626 [2024-07-10 13:43:09.948484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.626 [2024-07-10 13:43:09.948512] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:30.626 request: 00:19:30.626 { 00:19:30.626 "name": "raid_bdev1", 00:19:30.626 "raid_level": "concat", 00:19:30.626 "base_bdevs": [ 00:19:30.626 "malloc1", 00:19:30.626 "malloc2", 00:19:30.626 "malloc3", 00:19:30.626 "malloc4" 00:19:30.626 ], 00:19:30.626 "superblock": false, 00:19:30.626 "strip_size_kb": 64, 00:19:30.626 "method": "bdev_raid_create", 00:19:30.626 "req_id": 1 00:19:30.626 } 00:19:30.626 Got JSON-RPC error response 00:19:30.626 response: 00:19:30.626 { 00:19:30.626 "code": -17, 00:19:30.626 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:30.626 } 00:19:30.626 13:43:09 -- common/autotest_common.sh@643 -- # es=1 00:19:30.626 13:43:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:30.626 13:43:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:30.626 13:43:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:30.626 13:43:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.626 13:43:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:30.884 13:43:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:30.884 13:43:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:30.884 13:43:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:31.143 [2024-07-10 13:43:10.309387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.143 [2024-07-10 13:43:10.309545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.143 [2024-07-10 13:43:10.309591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:31.143 [2024-07-10 13:43:10.309658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.143 [2024-07-10 13:43:10.311564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.143 [2024-07-10 13:43:10.311670] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.143 [2024-07-10 13:43:10.311824] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:31.143 [2024-07-10 13:43:10.311908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.143 pt1 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.143 13:43:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.401 13:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.401 "name": "raid_bdev1", 00:19:31.401 "uuid": "9017cc4d-67f9-4154-8cc3-7fb1fc52f332", 00:19:31.401 "strip_size_kb": 64, 00:19:31.401 "state": "configuring", 00:19:31.401 "raid_level": "concat", 00:19:31.401 "superblock": true, 00:19:31.401 "num_base_bdevs": 4, 00:19:31.401 "num_base_bdevs_discovered": 1, 00:19:31.401 "num_base_bdevs_operational": 4, 00:19:31.401 "base_bdevs_list": [ 00:19:31.401 { 00:19:31.401 "name": "pt1", 00:19:31.401 "uuid": "03aedc7f-5eed-5352-8ded-2abf90208cde", 00:19:31.401 "is_configured": true, 00:19:31.401 "data_offset": 2048, 00:19:31.401 "data_size": 63488 00:19:31.401 }, 00:19:31.401 { 00:19:31.401 "name": null, 00:19:31.401 "uuid": "11ced6b4-382f-5b10-9d11-60e5f1831da7", 00:19:31.401 "is_configured": false, 00:19:31.401 "data_offset": 2048, 00:19:31.401 "data_size": 63488 00:19:31.401 }, 00:19:31.401 { 00:19:31.401 "name": null, 00:19:31.401 "uuid": "e629f1e8-89a1-5f9e-be2c-60c7a7486c12", 00:19:31.401 "is_configured": false, 00:19:31.401 "data_offset": 2048, 00:19:31.401 "data_size": 63488 00:19:31.401 }, 00:19:31.401 { 00:19:31.401 "name": null, 00:19:31.401 "uuid": "09313726-e2d2-580a-84f5-25788a8eebad", 00:19:31.401 "is_configured": false, 00:19:31.401 "data_offset": 2048, 00:19:31.401 "data_size": 63488 00:19:31.401 } 00:19:31.401 ] 00:19:31.401 }' 00:19:31.401 13:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.401 13:43:10 -- common/autotest_common.sh@10 -- # set +x 00:19:31.983 13:43:11 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:31.983 13:43:11 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:31.983 [2024-07-10 13:43:11.207892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:31.983 [2024-07-10 13:43:11.208052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.983 [2024-07-10 13:43:11.208114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:31.983 [2024-07-10 13:43:11.208172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.983 [2024-07-10 13:43:11.208674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.983 [2024-07-10 13:43:11.208765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:31.983 [2024-07-10 13:43:11.208930] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:31.983 [2024-07-10 13:43:11.208987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.983 pt2 00:19:31.983 13:43:11 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:32.262 [2024-07-10 13:43:11.387602] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
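The request/response pair above is the point of this test: deleting the passthru layer leaves the raid superblocks behind on the malloc bdevs, so re-creating the array directly on them must fail with -17 (File exists), while re-wrapping a single base bdev in a passthru is enough to bring the raid back in "configuring" state with one of four members discovered. A condensed sketch of that negative check, under the same socket assumption as the earlier sketches:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Must fail: each malloc bdev still carries the raid superblock written by -s.
  if $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo 'bdev_raid_create unexpectedly succeeded' >&2
    exit 1
  fi

  # Re-wrapping one base bdev lets examine find its superblock again, so the
  # raid reappears as "configuring" with num_base_bdevs_discovered = 1.
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'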
00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.262 "name": "raid_bdev1", 00:19:32.262 "uuid": "9017cc4d-67f9-4154-8cc3-7fb1fc52f332", 00:19:32.262 "strip_size_kb": 64, 00:19:32.262 "state": "configuring", 00:19:32.262 "raid_level": "concat", 00:19:32.262 "superblock": true, 00:19:32.262 "num_base_bdevs": 4, 00:19:32.262 "num_base_bdevs_discovered": 1, 00:19:32.262 "num_base_bdevs_operational": 4, 00:19:32.262 "base_bdevs_list": [ 00:19:32.262 { 00:19:32.262 "name": "pt1", 00:19:32.262 "uuid": "03aedc7f-5eed-5352-8ded-2abf90208cde", 00:19:32.262 "is_configured": true, 00:19:32.262 "data_offset": 2048, 00:19:32.262 "data_size": 63488 00:19:32.262 }, 00:19:32.262 { 00:19:32.262 "name": null, 00:19:32.262 "uuid": "11ced6b4-382f-5b10-9d11-60e5f1831da7", 00:19:32.262 "is_configured": false, 00:19:32.262 "data_offset": 2048, 00:19:32.262 "data_size": 63488 00:19:32.262 }, 00:19:32.262 { 00:19:32.262 "name": null, 00:19:32.262 "uuid": "e629f1e8-89a1-5f9e-be2c-60c7a7486c12", 00:19:32.262 "is_configured": false, 00:19:32.262 "data_offset": 2048, 00:19:32.262 "data_size": 63488 00:19:32.262 }, 00:19:32.262 { 00:19:32.262 "name": null, 00:19:32.262 "uuid": "09313726-e2d2-580a-84f5-25788a8eebad", 00:19:32.262 "is_configured": false, 00:19:32.262 "data_offset": 2048, 00:19:32.262 "data_size": 63488 00:19:32.262 } 00:19:32.262 ] 00:19:32.262 }' 00:19:32.262 13:43:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.262 13:43:11 -- common/autotest_common.sh@10 -- # set +x 00:19:33.200 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:33.200 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.200 13:43:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.200 [2024-07-10 13:43:12.353918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.200 [2024-07-10 13:43:12.354089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.200 [2024-07-10 13:43:12.354145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:33.200 [2024-07-10 13:43:12.354188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.200 [2024-07-10 13:43:12.354660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.200 [2024-07-10 13:43:12.354748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.200 [2024-07-10 13:43:12.354888] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:33.200 [2024-07-10 13:43:12.354938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.200 pt2 00:19:33.200 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.200 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.201 13:43:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:33.201 [2024-07-10 13:43:12.521644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:33.201 [2024-07-10 13:43:12.521795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.201 [2024-07-10 13:43:12.521840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:33.201 [2024-07-10 13:43:12.521882] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.201 [2024-07-10 13:43:12.522378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.201 [2024-07-10 13:43:12.522469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:33.201 [2024-07-10 13:43:12.522609] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:33.201 [2024-07-10 13:43:12.522660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:33.201 pt3 00:19:33.201 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.201 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.201 13:43:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:33.459 [2024-07-10 13:43:12.705332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:33.459 [2024-07-10 13:43:12.705482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.459 [2024-07-10 13:43:12.705538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:33.459 [2024-07-10 13:43:12.705583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.459 [2024-07-10 13:43:12.706033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.459 [2024-07-10 13:43:12.706118] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:33.459 [2024-07-10 13:43:12.706260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:33.459 [2024-07-10 13:43:12.706310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:33.459 [2024-07-10 13:43:12.706455] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:33.459 [2024-07-10 13:43:12.706490] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:33.459 [2024-07-10 13:43:12.706603] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:33.459 [2024-07-10 13:43:12.706917] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:33.459 [2024-07-10 13:43:12.706960] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:33.459 [2024-07-10 13:43:12.707106] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:33.459 pt4 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.459 13:43:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.716 13:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.716 "name": "raid_bdev1", 00:19:33.716 "uuid": "9017cc4d-67f9-4154-8cc3-7fb1fc52f332", 00:19:33.716 "strip_size_kb": 64, 00:19:33.716 "state": "online", 00:19:33.716 "raid_level": "concat", 00:19:33.716 "superblock": true, 00:19:33.716 "num_base_bdevs": 4, 00:19:33.716 "num_base_bdevs_discovered": 4, 00:19:33.716 "num_base_bdevs_operational": 4, 00:19:33.716 "base_bdevs_list": [ 00:19:33.716 { 00:19:33.716 "name": "pt1", 00:19:33.716 "uuid": "03aedc7f-5eed-5352-8ded-2abf90208cde", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": "pt2", 00:19:33.716 "uuid": "11ced6b4-382f-5b10-9d11-60e5f1831da7", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": "pt3", 00:19:33.716 "uuid": "e629f1e8-89a1-5f9e-be2c-60c7a7486c12", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 }, 00:19:33.716 { 00:19:33.716 "name": "pt4", 00:19:33.716 "uuid": "09313726-e2d2-580a-84f5-25788a8eebad", 00:19:33.716 "is_configured": true, 00:19:33.716 "data_offset": 2048, 00:19:33.716 "data_size": 63488 00:19:33.716 } 00:19:33.716 ] 00:19:33.716 }' 00:19:33.716 13:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.716 13:43:12 -- common/autotest_common.sh@10 -- # set +x 00:19:34.281 13:43:13 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:34.281 13:43:13 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.281 [2024-07-10 13:43:13.627910] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.540 13:43:13 -- bdev/bdev_raid.sh@430 -- # '[' 9017cc4d-67f9-4154-8cc3-7fb1fc52f332 '!=' 9017cc4d-67f9-4154-8cc3-7fb1fc52f332 ']' 00:19:34.540 13:43:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:34.540 13:43:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:34.540 13:43:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:34.540 13:43:13 -- bdev/bdev_raid.sh@511 -- # killprocess 123781 00:19:34.540 13:43:13 -- common/autotest_common.sh@926 -- # '[' 
-z 123781 ']' 00:19:34.540 13:43:13 -- common/autotest_common.sh@930 -- # kill -0 123781 00:19:34.540 13:43:13 -- common/autotest_common.sh@931 -- # uname 00:19:34.540 13:43:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.540 13:43:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123781 00:19:34.540 13:43:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.540 13:43:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.540 13:43:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123781' 00:19:34.540 killing process with pid 123781 00:19:34.540 13:43:13 -- common/autotest_common.sh@945 -- # kill 123781 00:19:34.540 13:43:13 -- common/autotest_common.sh@950 -- # wait 123781 00:19:34.540 [2024-07-10 13:43:13.666138] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.540 [2024-07-10 13:43:13.666212] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.540 [2024-07-10 13:43:13.666324] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.540 [2024-07-10 13:43:13.666361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:34.798 [2024-07-10 13:43:14.049420] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.174 ************************************ 00:19:36.174 END TEST raid_superblock_test 00:19:36.174 ************************************ 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:36.174 00:19:36.174 real 0m10.581s 00:19:36.174 user 0m18.021s 00:19:36.174 sys 0m1.187s 00:19:36.174 13:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.174 13:43:15 -- common/autotest_common.sh@10 -- # set +x 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:36.174 13:43:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:36.174 13:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:36.174 13:43:15 -- common/autotest_common.sh@10 -- # set +x 00:19:36.174 ************************************ 00:19:36.174 START TEST raid_state_function_test 00:19:36.174 ************************************ 00:19:36.174 13:43:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.174 13:43:15 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=124106 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:36.174 Process raid pid: 124106 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124106' 00:19:36.174 13:43:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124106 /var/tmp/spdk-raid.sock 00:19:36.174 13:43:15 -- common/autotest_common.sh@819 -- # '[' -z 124106 ']' 00:19:36.174 13:43:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:36.174 13:43:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.174 13:43:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:36.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:36.174 13:43:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.174 13:43:15 -- common/autotest_common.sh@10 -- # set +x 00:19:36.174 [2024-07-10 13:43:15.452215] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
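raid_state_function_test inverts the order used by the superblock test: as the trace that follows shows, it creates the raid1 array before any base bdev exists (note there is no -z argument, since strip_size is 0 for raid1), so every BaseBdevN is reported as "doesn't exist now" and the array sits in "configuring" with zero members; creating a malloc bdev under an expected name is then enough for the raid to claim it. A condensed sketch:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # No base bdevs exist yet; the raid registers but stays "configuring".
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # Creating a bdev with an expected name gets it claimed immediately:
  # num_base_bdevs_discovered goes from 0 to 1.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1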
00:19:36.174 [2024-07-10 13:43:15.452410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.433 [2024-07-10 13:43:15.593473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.433 [2024-07-10 13:43:15.786561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.691 [2024-07-10 13:43:15.976504] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.948 13:43:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.948 13:43:16 -- common/autotest_common.sh@852 -- # return 0 00:19:36.948 13:43:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:37.206 [2024-07-10 13:43:16.420729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.206 [2024-07-10 13:43:16.420865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.206 [2024-07-10 13:43:16.420910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.206 [2024-07-10 13:43:16.420937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.206 [2024-07-10 13:43:16.420960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:37.206 [2024-07-10 13:43:16.421001] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.206 [2024-07-10 13:43:16.421049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:37.206 [2024-07-10 13:43:16.421077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.206 13:43:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.464 13:43:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.464 "name": "Existed_Raid", 00:19:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.464 "strip_size_kb": 0, 00:19:37.464 "state": "configuring", 00:19:37.464 "raid_level": "raid1", 00:19:37.464 "superblock": false, 00:19:37.464 "num_base_bdevs": 4, 00:19:37.464 "num_base_bdevs_discovered": 0, 00:19:37.464 "num_base_bdevs_operational": 4, 00:19:37.464 "base_bdevs_list": [ 00:19:37.464 { 00:19:37.464 "name": 
"BaseBdev1", 00:19:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.464 "is_configured": false, 00:19:37.464 "data_offset": 0, 00:19:37.464 "data_size": 0 00:19:37.464 }, 00:19:37.464 { 00:19:37.464 "name": "BaseBdev2", 00:19:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.464 "is_configured": false, 00:19:37.464 "data_offset": 0, 00:19:37.464 "data_size": 0 00:19:37.464 }, 00:19:37.464 { 00:19:37.464 "name": "BaseBdev3", 00:19:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.464 "is_configured": false, 00:19:37.464 "data_offset": 0, 00:19:37.464 "data_size": 0 00:19:37.464 }, 00:19:37.464 { 00:19:37.464 "name": "BaseBdev4", 00:19:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.464 "is_configured": false, 00:19:37.464 "data_offset": 0, 00:19:37.464 "data_size": 0 00:19:37.464 } 00:19:37.464 ] 00:19:37.464 }' 00:19:37.464 13:43:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.464 13:43:16 -- common/autotest_common.sh@10 -- # set +x 00:19:38.030 13:43:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:38.305 [2024-07-10 13:43:17.410919] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.305 [2024-07-10 13:43:17.410999] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:38.305 13:43:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:38.305 [2024-07-10 13:43:17.606610] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.305 [2024-07-10 13:43:17.606723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.305 [2024-07-10 13:43:17.606748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.305 [2024-07-10 13:43:17.606784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.305 [2024-07-10 13:43:17.606801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.305 [2024-07-10 13:43:17.606854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.305 [2024-07-10 13:43:17.606893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:38.305 [2024-07-10 13:43:17.606921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:38.305 13:43:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:38.579 [2024-07-10 13:43:17.817266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.579 BaseBdev1 00:19:38.579 13:43:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:38.579 13:43:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:38.579 13:43:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:38.579 13:43:17 -- common/autotest_common.sh@889 -- # local i 00:19:38.579 13:43:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:38.579 13:43:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:38.579 13:43:17 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.837 13:43:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:38.837 [ 00:19:38.837 { 00:19:38.837 "name": "BaseBdev1", 00:19:38.837 "aliases": [ 00:19:38.837 "77b1a8ef-a455-42f5-9d17-11d013e964fe" 00:19:38.837 ], 00:19:38.837 "product_name": "Malloc disk", 00:19:38.837 "block_size": 512, 00:19:38.837 "num_blocks": 65536, 00:19:38.837 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:38.837 "assigned_rate_limits": { 00:19:38.837 "rw_ios_per_sec": 0, 00:19:38.837 "rw_mbytes_per_sec": 0, 00:19:38.837 "r_mbytes_per_sec": 0, 00:19:38.837 "w_mbytes_per_sec": 0 00:19:38.837 }, 00:19:38.837 "claimed": true, 00:19:38.837 "claim_type": "exclusive_write", 00:19:38.837 "zoned": false, 00:19:38.837 "supported_io_types": { 00:19:38.837 "read": true, 00:19:38.837 "write": true, 00:19:38.837 "unmap": true, 00:19:38.837 "write_zeroes": true, 00:19:38.837 "flush": true, 00:19:38.837 "reset": true, 00:19:38.837 "compare": false, 00:19:38.837 "compare_and_write": false, 00:19:38.837 "abort": true, 00:19:38.837 "nvme_admin": false, 00:19:38.837 "nvme_io": false 00:19:38.837 }, 00:19:38.837 "memory_domains": [ 00:19:38.837 { 00:19:38.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.837 "dma_device_type": 2 00:19:38.837 } 00:19:38.837 ], 00:19:38.837 "driver_specific": {} 00:19:38.837 } 00:19:38.837 ] 00:19:38.837 13:43:18 -- common/autotest_common.sh@895 -- # return 0 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.837 13:43:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.095 13:43:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.095 13:43:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.096 13:43:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.096 "name": "Existed_Raid", 00:19:39.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.096 "strip_size_kb": 0, 00:19:39.096 "state": "configuring", 00:19:39.096 "raid_level": "raid1", 00:19:39.096 "superblock": false, 00:19:39.096 "num_base_bdevs": 4, 00:19:39.096 "num_base_bdevs_discovered": 1, 00:19:39.096 "num_base_bdevs_operational": 4, 00:19:39.096 "base_bdevs_list": [ 00:19:39.096 { 00:19:39.096 "name": "BaseBdev1", 00:19:39.096 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:39.096 "is_configured": true, 00:19:39.096 "data_offset": 0, 00:19:39.096 "data_size": 65536 00:19:39.096 }, 00:19:39.096 { 00:19:39.096 "name": "BaseBdev2", 00:19:39.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.096 "is_configured": false, 00:19:39.096 "data_offset": 0, 00:19:39.096 "data_size": 0 00:19:39.096 }, 
00:19:39.096 { 00:19:39.096 "name": "BaseBdev3", 00:19:39.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.096 "is_configured": false, 00:19:39.096 "data_offset": 0, 00:19:39.096 "data_size": 0 00:19:39.096 }, 00:19:39.096 { 00:19:39.096 "name": "BaseBdev4", 00:19:39.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.096 "is_configured": false, 00:19:39.096 "data_offset": 0, 00:19:39.096 "data_size": 0 00:19:39.096 } 00:19:39.096 ] 00:19:39.096 }' 00:19:39.096 13:43:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.096 13:43:18 -- common/autotest_common.sh@10 -- # set +x 00:19:39.661 13:43:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:39.919 [2024-07-10 13:43:19.115040] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.919 [2024-07-10 13:43:19.115157] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:39.919 13:43:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:39.919 13:43:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:39.919 [2024-07-10 13:43:19.266821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.919 [2024-07-10 13:43:19.268540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.919 [2024-07-10 13:43:19.268638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.919 [2024-07-10 13:43:19.268662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.919 [2024-07-10 13:43:19.268690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.919 [2024-07-10 13:43:19.268706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:39.919 [2024-07-10 13:43:19.268725] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.176 "name": "Existed_Raid", 00:19:40.176 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:40.176 "strip_size_kb": 0, 00:19:40.176 "state": "configuring", 00:19:40.176 "raid_level": "raid1", 00:19:40.176 "superblock": false, 00:19:40.176 "num_base_bdevs": 4, 00:19:40.176 "num_base_bdevs_discovered": 1, 00:19:40.176 "num_base_bdevs_operational": 4, 00:19:40.176 "base_bdevs_list": [ 00:19:40.176 { 00:19:40.176 "name": "BaseBdev1", 00:19:40.176 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:40.176 "is_configured": true, 00:19:40.176 "data_offset": 0, 00:19:40.176 "data_size": 65536 00:19:40.176 }, 00:19:40.176 { 00:19:40.176 "name": "BaseBdev2", 00:19:40.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.176 "is_configured": false, 00:19:40.176 "data_offset": 0, 00:19:40.176 "data_size": 0 00:19:40.176 }, 00:19:40.176 { 00:19:40.176 "name": "BaseBdev3", 00:19:40.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.176 "is_configured": false, 00:19:40.176 "data_offset": 0, 00:19:40.176 "data_size": 0 00:19:40.176 }, 00:19:40.176 { 00:19:40.176 "name": "BaseBdev4", 00:19:40.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.176 "is_configured": false, 00:19:40.176 "data_offset": 0, 00:19:40.176 "data_size": 0 00:19:40.176 } 00:19:40.176 ] 00:19:40.176 }' 00:19:40.176 13:43:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.176 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:19:40.743 13:43:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.000 [2024-07-10 13:43:20.230885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.000 BaseBdev2 00:19:41.000 13:43:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:41.000 13:43:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:41.000 13:43:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:41.000 13:43:20 -- common/autotest_common.sh@889 -- # local i 00:19:41.000 13:43:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:41.000 13:43:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:41.000 13:43:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.259 13:43:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:41.259 [ 00:19:41.259 { 00:19:41.259 "name": "BaseBdev2", 00:19:41.259 "aliases": [ 00:19:41.259 "753c3867-ac6e-479e-934e-2cd370b98446" 00:19:41.259 ], 00:19:41.259 "product_name": "Malloc disk", 00:19:41.259 "block_size": 512, 00:19:41.259 "num_blocks": 65536, 00:19:41.259 "uuid": "753c3867-ac6e-479e-934e-2cd370b98446", 00:19:41.259 "assigned_rate_limits": { 00:19:41.259 "rw_ios_per_sec": 0, 00:19:41.259 "rw_mbytes_per_sec": 0, 00:19:41.259 "r_mbytes_per_sec": 0, 00:19:41.259 "w_mbytes_per_sec": 0 00:19:41.259 }, 00:19:41.259 "claimed": true, 00:19:41.259 "claim_type": "exclusive_write", 00:19:41.259 "zoned": false, 00:19:41.259 "supported_io_types": { 00:19:41.259 "read": true, 00:19:41.259 "write": true, 00:19:41.259 "unmap": true, 00:19:41.259 "write_zeroes": true, 00:19:41.259 "flush": true, 00:19:41.259 "reset": true, 00:19:41.259 "compare": false, 00:19:41.259 "compare_and_write": false, 00:19:41.259 "abort": true, 00:19:41.259 "nvme_admin": false, 00:19:41.259 "nvme_io": false 00:19:41.259 }, 00:19:41.259 "memory_domains": [ 00:19:41.259 { 
00:19:41.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.259 "dma_device_type": 2 00:19:41.259 } 00:19:41.259 ], 00:19:41.259 "driver_specific": {} 00:19:41.259 } 00:19:41.259 ] 00:19:41.259 13:43:20 -- common/autotest_common.sh@895 -- # return 0 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.259 13:43:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.517 13:43:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.517 "name": "Existed_Raid", 00:19:41.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.517 "strip_size_kb": 0, 00:19:41.517 "state": "configuring", 00:19:41.517 "raid_level": "raid1", 00:19:41.517 "superblock": false, 00:19:41.517 "num_base_bdevs": 4, 00:19:41.517 "num_base_bdevs_discovered": 2, 00:19:41.517 "num_base_bdevs_operational": 4, 00:19:41.517 "base_bdevs_list": [ 00:19:41.517 { 00:19:41.517 "name": "BaseBdev1", 00:19:41.517 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:41.517 "is_configured": true, 00:19:41.517 "data_offset": 0, 00:19:41.517 "data_size": 65536 00:19:41.517 }, 00:19:41.517 { 00:19:41.517 "name": "BaseBdev2", 00:19:41.517 "uuid": "753c3867-ac6e-479e-934e-2cd370b98446", 00:19:41.517 "is_configured": true, 00:19:41.517 "data_offset": 0, 00:19:41.517 "data_size": 65536 00:19:41.517 }, 00:19:41.517 { 00:19:41.517 "name": "BaseBdev3", 00:19:41.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.517 "is_configured": false, 00:19:41.517 "data_offset": 0, 00:19:41.517 "data_size": 0 00:19:41.517 }, 00:19:41.517 { 00:19:41.517 "name": "BaseBdev4", 00:19:41.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.517 "is_configured": false, 00:19:41.517 "data_offset": 0, 00:19:41.517 "data_size": 0 00:19:41.517 } 00:19:41.517 ] 00:19:41.517 }' 00:19:41.517 13:43:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.517 13:43:20 -- common/autotest_common.sh@10 -- # set +x 00:19:42.083 13:43:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:42.341 [2024-07-10 13:43:21.546028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:42.341 BaseBdev3 00:19:42.341 13:43:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:42.341 13:43:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:42.341 13:43:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:42.341 13:43:21 -- 
common/autotest_common.sh@889 -- # local i 00:19:42.341 13:43:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:42.341 13:43:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:42.341 13:43:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.600 13:43:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:42.600 [ 00:19:42.600 { 00:19:42.600 "name": "BaseBdev3", 00:19:42.600 "aliases": [ 00:19:42.600 "b8353228-d247-4f2f-b45c-98b1f5203175" 00:19:42.600 ], 00:19:42.600 "product_name": "Malloc disk", 00:19:42.600 "block_size": 512, 00:19:42.600 "num_blocks": 65536, 00:19:42.600 "uuid": "b8353228-d247-4f2f-b45c-98b1f5203175", 00:19:42.600 "assigned_rate_limits": { 00:19:42.600 "rw_ios_per_sec": 0, 00:19:42.600 "rw_mbytes_per_sec": 0, 00:19:42.600 "r_mbytes_per_sec": 0, 00:19:42.600 "w_mbytes_per_sec": 0 00:19:42.600 }, 00:19:42.600 "claimed": true, 00:19:42.600 "claim_type": "exclusive_write", 00:19:42.600 "zoned": false, 00:19:42.600 "supported_io_types": { 00:19:42.600 "read": true, 00:19:42.600 "write": true, 00:19:42.600 "unmap": true, 00:19:42.600 "write_zeroes": true, 00:19:42.600 "flush": true, 00:19:42.600 "reset": true, 00:19:42.600 "compare": false, 00:19:42.600 "compare_and_write": false, 00:19:42.600 "abort": true, 00:19:42.600 "nvme_admin": false, 00:19:42.600 "nvme_io": false 00:19:42.600 }, 00:19:42.600 "memory_domains": [ 00:19:42.600 { 00:19:42.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.600 "dma_device_type": 2 00:19:42.600 } 00:19:42.600 ], 00:19:42.600 "driver_specific": {} 00:19:42.600 } 00:19:42.600 ] 00:19:42.600 13:43:21 -- common/autotest_common.sh@895 -- # return 0 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.600 13:43:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.859 13:43:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.859 "name": "Existed_Raid", 00:19:42.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.859 "strip_size_kb": 0, 00:19:42.859 "state": "configuring", 00:19:42.859 "raid_level": "raid1", 00:19:42.859 "superblock": false, 00:19:42.859 "num_base_bdevs": 4, 00:19:42.859 "num_base_bdevs_discovered": 3, 00:19:42.859 "num_base_bdevs_operational": 4, 00:19:42.859 "base_bdevs_list": [ 00:19:42.859 { 00:19:42.859 "name": "BaseBdev1", 
00:19:42.859 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:42.859 "is_configured": true, 00:19:42.859 "data_offset": 0, 00:19:42.859 "data_size": 65536 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "BaseBdev2", 00:19:42.859 "uuid": "753c3867-ac6e-479e-934e-2cd370b98446", 00:19:42.859 "is_configured": true, 00:19:42.859 "data_offset": 0, 00:19:42.859 "data_size": 65536 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "BaseBdev3", 00:19:42.859 "uuid": "b8353228-d247-4f2f-b45c-98b1f5203175", 00:19:42.859 "is_configured": true, 00:19:42.859 "data_offset": 0, 00:19:42.859 "data_size": 65536 00:19:42.859 }, 00:19:42.859 { 00:19:42.859 "name": "BaseBdev4", 00:19:42.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.859 "is_configured": false, 00:19:42.859 "data_offset": 0, 00:19:42.859 "data_size": 0 00:19:42.859 } 00:19:42.859 ] 00:19:42.859 }' 00:19:42.859 13:43:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.859 13:43:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.427 13:43:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:43.685 [2024-07-10 13:43:22.868583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:43.685 [2024-07-10 13:43:22.868705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:43.685 [2024-07-10 13:43:22.868725] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:43.685 [2024-07-10 13:43:22.868860] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:43.685 [2024-07-10 13:43:22.869191] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:43.685 [2024-07-10 13:43:22.869234] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:43.685 [2024-07-10 13:43:22.869474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.685 BaseBdev4 00:19:43.685 13:43:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:43.685 13:43:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:43.685 13:43:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:43.685 13:43:22 -- common/autotest_common.sh@889 -- # local i 00:19:43.685 13:43:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:43.685 13:43:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:43.685 13:43:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.943 13:43:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:43.943 [ 00:19:43.943 { 00:19:43.943 "name": "BaseBdev4", 00:19:43.943 "aliases": [ 00:19:43.943 "1f2aa55e-31a3-4b0d-b6e8-a954f0159fc8" 00:19:43.943 ], 00:19:43.943 "product_name": "Malloc disk", 00:19:43.943 "block_size": 512, 00:19:43.943 "num_blocks": 65536, 00:19:43.943 "uuid": "1f2aa55e-31a3-4b0d-b6e8-a954f0159fc8", 00:19:43.943 "assigned_rate_limits": { 00:19:43.943 "rw_ios_per_sec": 0, 00:19:43.943 "rw_mbytes_per_sec": 0, 00:19:43.943 "r_mbytes_per_sec": 0, 00:19:43.943 "w_mbytes_per_sec": 0 00:19:43.943 }, 00:19:43.943 "claimed": true, 00:19:43.943 "claim_type": "exclusive_write", 00:19:43.943 "zoned": false, 00:19:43.943 "supported_io_types": { 
00:19:43.943 "read": true, 00:19:43.943 "write": true, 00:19:43.943 "unmap": true, 00:19:43.943 "write_zeroes": true, 00:19:43.943 "flush": true, 00:19:43.943 "reset": true, 00:19:43.943 "compare": false, 00:19:43.943 "compare_and_write": false, 00:19:43.943 "abort": true, 00:19:43.943 "nvme_admin": false, 00:19:43.943 "nvme_io": false 00:19:43.943 }, 00:19:43.943 "memory_domains": [ 00:19:43.943 { 00:19:43.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.943 "dma_device_type": 2 00:19:43.943 } 00:19:43.943 ], 00:19:43.943 "driver_specific": {} 00:19:43.943 } 00:19:43.943 ] 00:19:43.943 13:43:23 -- common/autotest_common.sh@895 -- # return 0 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.943 13:43:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.203 13:43:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.203 "name": "Existed_Raid", 00:19:44.203 "uuid": "e08d6553-7fc2-49f5-9c62-714ee65854a7", 00:19:44.203 "strip_size_kb": 0, 00:19:44.203 "state": "online", 00:19:44.203 "raid_level": "raid1", 00:19:44.203 "superblock": false, 00:19:44.203 "num_base_bdevs": 4, 00:19:44.203 "num_base_bdevs_discovered": 4, 00:19:44.203 "num_base_bdevs_operational": 4, 00:19:44.203 "base_bdevs_list": [ 00:19:44.203 { 00:19:44.203 "name": "BaseBdev1", 00:19:44.203 "uuid": "77b1a8ef-a455-42f5-9d17-11d013e964fe", 00:19:44.203 "is_configured": true, 00:19:44.203 "data_offset": 0, 00:19:44.203 "data_size": 65536 00:19:44.203 }, 00:19:44.203 { 00:19:44.203 "name": "BaseBdev2", 00:19:44.203 "uuid": "753c3867-ac6e-479e-934e-2cd370b98446", 00:19:44.203 "is_configured": true, 00:19:44.203 "data_offset": 0, 00:19:44.203 "data_size": 65536 00:19:44.203 }, 00:19:44.203 { 00:19:44.203 "name": "BaseBdev3", 00:19:44.203 "uuid": "b8353228-d247-4f2f-b45c-98b1f5203175", 00:19:44.203 "is_configured": true, 00:19:44.203 "data_offset": 0, 00:19:44.203 "data_size": 65536 00:19:44.203 }, 00:19:44.203 { 00:19:44.203 "name": "BaseBdev4", 00:19:44.203 "uuid": "1f2aa55e-31a3-4b0d-b6e8-a954f0159fc8", 00:19:44.203 "is_configured": true, 00:19:44.203 "data_offset": 0, 00:19:44.203 "data_size": 65536 00:19:44.203 } 00:19:44.203 ] 00:19:44.203 }' 00:19:44.203 13:43:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.203 13:43:23 -- common/autotest_common.sh@10 -- # set +x 00:19:44.773 13:43:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:45.038 [2024-07-10 13:43:24.166517] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.038 13:43:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.296 13:43:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.296 "name": "Existed_Raid", 00:19:45.296 "uuid": "e08d6553-7fc2-49f5-9c62-714ee65854a7", 00:19:45.296 "strip_size_kb": 0, 00:19:45.296 "state": "online", 00:19:45.296 "raid_level": "raid1", 00:19:45.296 "superblock": false, 00:19:45.296 "num_base_bdevs": 4, 00:19:45.296 "num_base_bdevs_discovered": 3, 00:19:45.296 "num_base_bdevs_operational": 3, 00:19:45.296 "base_bdevs_list": [ 00:19:45.296 { 00:19:45.296 "name": null, 00:19:45.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.296 "is_configured": false, 00:19:45.296 "data_offset": 0, 00:19:45.296 "data_size": 65536 00:19:45.296 }, 00:19:45.296 { 00:19:45.296 "name": "BaseBdev2", 00:19:45.296 "uuid": "753c3867-ac6e-479e-934e-2cd370b98446", 00:19:45.296 "is_configured": true, 00:19:45.296 "data_offset": 0, 00:19:45.296 "data_size": 65536 00:19:45.296 }, 00:19:45.296 { 00:19:45.296 "name": "BaseBdev3", 00:19:45.296 "uuid": "b8353228-d247-4f2f-b45c-98b1f5203175", 00:19:45.296 "is_configured": true, 00:19:45.296 "data_offset": 0, 00:19:45.296 "data_size": 65536 00:19:45.296 }, 00:19:45.296 { 00:19:45.296 "name": "BaseBdev4", 00:19:45.296 "uuid": "1f2aa55e-31a3-4b0d-b6e8-a954f0159fc8", 00:19:45.296 "is_configured": true, 00:19:45.296 "data_offset": 0, 00:19:45.296 "data_size": 65536 00:19:45.296 } 00:19:45.296 ] 00:19:45.296 }' 00:19:45.296 13:43:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.296 13:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:45.863 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:45.863 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:45.863 13:43:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.863 13:43:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:46.121 13:43:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:46.121 13:43:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.121 13:43:25 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:46.121 [2024-07-10 13:43:25.381575] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.379 13:43:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:46.637 [2024-07-10 13:43:25.803921] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:46.637 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:46.637 13:43:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:46.637 13:43:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.637 13:43:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:46.896 13:43:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:46.896 13:43:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.896 13:43:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:47.154 [2024-07-10 13:43:26.251921] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:47.154 [2024-07-10 13:43:26.252013] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.154 [2024-07-10 13:43:26.252096] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.154 [2024-07-10 13:43:26.347966] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.154 [2024-07-10 13:43:26.348072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:47.154 13:43:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:47.154 13:43:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:47.154 13:43:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.154 13:43:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:47.412 13:43:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:47.412 13:43:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:47.412 13:43:26 -- bdev/bdev_raid.sh@287 -- # killprocess 124106 00:19:47.412 13:43:26 -- common/autotest_common.sh@926 -- # '[' -z 124106 ']' 00:19:47.412 13:43:26 -- common/autotest_common.sh@930 -- # kill -0 124106 00:19:47.412 13:43:26 -- common/autotest_common.sh@931 -- # uname 00:19:47.412 13:43:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:47.412 13:43:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124106 00:19:47.412 killing process with pid 124106 00:19:47.412 13:43:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:47.412 13:43:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:47.412 13:43:26 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 124106' 00:19:47.412 13:43:26 -- common/autotest_common.sh@945 -- # kill 124106 00:19:47.412 13:43:26 -- common/autotest_common.sh@950 -- # wait 124106 00:19:47.412 [2024-07-10 13:43:26.569136] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.412 [2024-07-10 13:43:26.569245] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:48.788 ************************************ 00:19:48.788 END TEST raid_state_function_test 00:19:48.788 ************************************ 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:48.788 00:19:48.788 real 0m12.464s 00:19:48.788 user 0m21.673s 00:19:48.788 sys 0m1.572s 00:19:48.788 13:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.788 13:43:27 -- common/autotest_common.sh@10 -- # set +x 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:48.788 13:43:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:48.788 13:43:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:48.788 13:43:27 -- common/autotest_common.sh@10 -- # set +x 00:19:48.788 ************************************ 00:19:48.788 START TEST raid_state_function_test_sb 00:19:48.788 ************************************ 00:19:48.788 13:43:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:48.788 
13:43:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=124556 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124556' 00:19:48.788 Process raid pid: 124556 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124556 /var/tmp/spdk-raid.sock 00:19:48.788 13:43:27 -- common/autotest_common.sh@819 -- # '[' -z 124556 ']' 00:19:48.788 13:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:48.788 13:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:48.788 13:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:48.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:48.788 13:43:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:48.788 13:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:48.788 13:43:27 -- common/autotest_common.sh@10 -- # set +x 00:19:48.788 [2024-07-10 13:43:27.967396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:48.788 [2024-07-10 13:43:27.967587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.788 [2024-07-10 13:43:28.125475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.046 [2024-07-10 13:43:28.315620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.304 [2024-07-10 13:43:28.514705] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:49.563 13:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:49.563 13:43:28 -- common/autotest_common.sh@852 -- # return 0 00:19:49.563 13:43:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:49.821 [2024-07-10 13:43:28.921992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:49.821 [2024-07-10 13:43:28.922122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:49.821 [2024-07-10 13:43:28.922163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:49.821 [2024-07-10 13:43:28.922188] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:49.821 [2024-07-10 13:43:28.922203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:49.821 [2024-07-10 13:43:28.922239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:49.821 [2024-07-10 13:43:28.922255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:49.821 [2024-07-10 13:43:28.922281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.821 13:43:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.821 13:43:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.821 "name": "Existed_Raid", 00:19:49.821 "uuid": "2c0f8dcd-b866-493c-800f-53be46458a64", 00:19:49.821 "strip_size_kb": 0, 00:19:49.821 "state": "configuring", 00:19:49.821 "raid_level": "raid1", 00:19:49.822 "superblock": true, 00:19:49.822 "num_base_bdevs": 4, 00:19:49.822 "num_base_bdevs_discovered": 0, 00:19:49.822 "num_base_bdevs_operational": 4, 00:19:49.822 "base_bdevs_list": [ 00:19:49.822 { 00:19:49.822 "name": "BaseBdev1", 00:19:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.822 "is_configured": false, 00:19:49.822 "data_offset": 0, 00:19:49.822 "data_size": 0 00:19:49.822 }, 00:19:49.822 { 00:19:49.822 "name": "BaseBdev2", 00:19:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.822 "is_configured": false, 00:19:49.822 "data_offset": 0, 00:19:49.822 "data_size": 0 00:19:49.822 }, 00:19:49.822 { 00:19:49.822 "name": "BaseBdev3", 00:19:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.822 "is_configured": false, 00:19:49.822 "data_offset": 0, 00:19:49.822 "data_size": 0 00:19:49.822 }, 00:19:49.822 { 00:19:49.822 "name": "BaseBdev4", 00:19:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.822 "is_configured": false, 00:19:49.822 "data_offset": 0, 00:19:49.822 "data_size": 0 00:19:49.822 } 00:19:49.822 ] 00:19:49.822 }' 00:19:49.822 13:43:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.822 13:43:29 -- common/autotest_common.sh@10 -- # set +x 00:19:50.388 13:43:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:50.647 [2024-07-10 13:43:29.860219] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:50.647 [2024-07-10 13:43:29.860324] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:50.647 13:43:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:50.906 [2024-07-10 13:43:30.035993] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.906 [2024-07-10 13:43:30.036101] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.906 [2024-07-10 13:43:30.036141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.906 [2024-07-10 13:43:30.036179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.906 [2024-07-10 13:43:30.036197] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:50.906 [2024-07-10 
13:43:30.036234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.906 [2024-07-10 13:43:30.036248] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:50.906 [2024-07-10 13:43:30.036274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:50.906 13:43:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:50.906 [2024-07-10 13:43:30.261150] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.164 BaseBdev1 00:19:51.164 13:43:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:51.164 13:43:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:51.164 13:43:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:51.164 13:43:30 -- common/autotest_common.sh@889 -- # local i 00:19:51.164 13:43:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:51.164 13:43:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:51.164 13:43:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.164 13:43:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.422 [ 00:19:51.422 { 00:19:51.422 "name": "BaseBdev1", 00:19:51.422 "aliases": [ 00:19:51.422 "1a3c135e-bbfd-4151-a13b-48a8e47274a9" 00:19:51.422 ], 00:19:51.422 "product_name": "Malloc disk", 00:19:51.422 "block_size": 512, 00:19:51.422 "num_blocks": 65536, 00:19:51.422 "uuid": "1a3c135e-bbfd-4151-a13b-48a8e47274a9", 00:19:51.422 "assigned_rate_limits": { 00:19:51.422 "rw_ios_per_sec": 0, 00:19:51.422 "rw_mbytes_per_sec": 0, 00:19:51.422 "r_mbytes_per_sec": 0, 00:19:51.422 "w_mbytes_per_sec": 0 00:19:51.422 }, 00:19:51.422 "claimed": true, 00:19:51.422 "claim_type": "exclusive_write", 00:19:51.422 "zoned": false, 00:19:51.422 "supported_io_types": { 00:19:51.422 "read": true, 00:19:51.422 "write": true, 00:19:51.422 "unmap": true, 00:19:51.422 "write_zeroes": true, 00:19:51.422 "flush": true, 00:19:51.422 "reset": true, 00:19:51.422 "compare": false, 00:19:51.422 "compare_and_write": false, 00:19:51.422 "abort": true, 00:19:51.422 "nvme_admin": false, 00:19:51.422 "nvme_io": false 00:19:51.422 }, 00:19:51.422 "memory_domains": [ 00:19:51.422 { 00:19:51.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.422 "dma_device_type": 2 00:19:51.422 } 00:19:51.422 ], 00:19:51.422 "driver_specific": {} 00:19:51.422 } 00:19:51.422 ] 00:19:51.422 13:43:30 -- common/autotest_common.sh@895 -- # return 0 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.422 13:43:30 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.422 13:43:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.680 13:43:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.680 "name": "Existed_Raid", 00:19:51.680 "uuid": "8ea9f1bd-903a-4dc8-9134-11e3ff1829a6", 00:19:51.680 "strip_size_kb": 0, 00:19:51.680 "state": "configuring", 00:19:51.680 "raid_level": "raid1", 00:19:51.680 "superblock": true, 00:19:51.680 "num_base_bdevs": 4, 00:19:51.680 "num_base_bdevs_discovered": 1, 00:19:51.680 "num_base_bdevs_operational": 4, 00:19:51.680 "base_bdevs_list": [ 00:19:51.680 { 00:19:51.680 "name": "BaseBdev1", 00:19:51.680 "uuid": "1a3c135e-bbfd-4151-a13b-48a8e47274a9", 00:19:51.680 "is_configured": true, 00:19:51.680 "data_offset": 2048, 00:19:51.680 "data_size": 63488 00:19:51.680 }, 00:19:51.680 { 00:19:51.680 "name": "BaseBdev2", 00:19:51.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.680 "is_configured": false, 00:19:51.680 "data_offset": 0, 00:19:51.680 "data_size": 0 00:19:51.680 }, 00:19:51.680 { 00:19:51.680 "name": "BaseBdev3", 00:19:51.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.680 "is_configured": false, 00:19:51.680 "data_offset": 0, 00:19:51.680 "data_size": 0 00:19:51.680 }, 00:19:51.680 { 00:19:51.680 "name": "BaseBdev4", 00:19:51.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.680 "is_configured": false, 00:19:51.680 "data_offset": 0, 00:19:51.680 "data_size": 0 00:19:51.680 } 00:19:51.680 ] 00:19:51.680 }' 00:19:51.680 13:43:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.680 13:43:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.246 13:43:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.504 [2024-07-10 13:43:31.615646] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.504 [2024-07-10 13:43:31.615760] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:52.504 13:43:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:52.504 13:43:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:52.762 13:43:31 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:52.762 BaseBdev1 00:19:52.762 13:43:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:52.762 13:43:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:52.762 13:43:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:52.762 13:43:32 -- common/autotest_common.sh@889 -- # local i 00:19:52.762 13:43:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:52.762 13:43:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:52.762 13:43:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.020 13:43:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:53.279 [ 00:19:53.279 { 00:19:53.279 "name": "BaseBdev1", 00:19:53.279 "aliases": [ 00:19:53.279 
"23495dec-29f9-48e8-ba86-e604ac8957f9" 00:19:53.279 ], 00:19:53.279 "product_name": "Malloc disk", 00:19:53.279 "block_size": 512, 00:19:53.279 "num_blocks": 65536, 00:19:53.279 "uuid": "23495dec-29f9-48e8-ba86-e604ac8957f9", 00:19:53.279 "assigned_rate_limits": { 00:19:53.279 "rw_ios_per_sec": 0, 00:19:53.279 "rw_mbytes_per_sec": 0, 00:19:53.279 "r_mbytes_per_sec": 0, 00:19:53.279 "w_mbytes_per_sec": 0 00:19:53.279 }, 00:19:53.279 "claimed": false, 00:19:53.279 "zoned": false, 00:19:53.279 "supported_io_types": { 00:19:53.279 "read": true, 00:19:53.279 "write": true, 00:19:53.279 "unmap": true, 00:19:53.279 "write_zeroes": true, 00:19:53.279 "flush": true, 00:19:53.279 "reset": true, 00:19:53.279 "compare": false, 00:19:53.279 "compare_and_write": false, 00:19:53.279 "abort": true, 00:19:53.279 "nvme_admin": false, 00:19:53.279 "nvme_io": false 00:19:53.279 }, 00:19:53.279 "memory_domains": [ 00:19:53.279 { 00:19:53.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.279 "dma_device_type": 2 00:19:53.279 } 00:19:53.279 ], 00:19:53.279 "driver_specific": {} 00:19:53.279 } 00:19:53.279 ] 00:19:53.279 13:43:32 -- common/autotest_common.sh@895 -- # return 0 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:53.279 [2024-07-10 13:43:32.590477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.279 [2024-07-10 13:43:32.592054] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.279 [2024-07-10 13:43:32.592166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.279 [2024-07-10 13:43:32.592204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.279 [2024-07-10 13:43:32.592251] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.279 [2024-07-10 13:43:32.592276] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:53.279 [2024-07-10 13:43:32.592298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.279 13:43:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.537 13:43:32 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:53.537 "name": "Existed_Raid", 00:19:53.537 "uuid": "475b28c3-ad92-454a-a3c5-d9afba1e863d", 00:19:53.537 "strip_size_kb": 0, 00:19:53.537 "state": "configuring", 00:19:53.537 "raid_level": "raid1", 00:19:53.537 "superblock": true, 00:19:53.537 "num_base_bdevs": 4, 00:19:53.537 "num_base_bdevs_discovered": 1, 00:19:53.537 "num_base_bdevs_operational": 4, 00:19:53.537 "base_bdevs_list": [ 00:19:53.537 { 00:19:53.537 "name": "BaseBdev1", 00:19:53.537 "uuid": "23495dec-29f9-48e8-ba86-e604ac8957f9", 00:19:53.537 "is_configured": true, 00:19:53.537 "data_offset": 2048, 00:19:53.537 "data_size": 63488 00:19:53.537 }, 00:19:53.537 { 00:19:53.537 "name": "BaseBdev2", 00:19:53.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.537 "is_configured": false, 00:19:53.537 "data_offset": 0, 00:19:53.537 "data_size": 0 00:19:53.537 }, 00:19:53.537 { 00:19:53.537 "name": "BaseBdev3", 00:19:53.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.537 "is_configured": false, 00:19:53.537 "data_offset": 0, 00:19:53.537 "data_size": 0 00:19:53.537 }, 00:19:53.537 { 00:19:53.537 "name": "BaseBdev4", 00:19:53.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.537 "is_configured": false, 00:19:53.537 "data_offset": 0, 00:19:53.537 "data_size": 0 00:19:53.537 } 00:19:53.537 ] 00:19:53.537 }' 00:19:53.537 13:43:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.537 13:43:32 -- common/autotest_common.sh@10 -- # set +x 00:19:54.104 13:43:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:54.362 [2024-07-10 13:43:33.598610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.362 BaseBdev2 00:19:54.362 13:43:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:54.362 13:43:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:54.362 13:43:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:54.362 13:43:33 -- common/autotest_common.sh@889 -- # local i 00:19:54.362 13:43:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:54.362 13:43:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:54.362 13:43:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.620 13:43:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:54.878 [ 00:19:54.878 { 00:19:54.878 "name": "BaseBdev2", 00:19:54.878 "aliases": [ 00:19:54.878 "ef2bb534-40fd-47cd-b77e-4c6fa59c9939" 00:19:54.878 ], 00:19:54.878 "product_name": "Malloc disk", 00:19:54.878 "block_size": 512, 00:19:54.878 "num_blocks": 65536, 00:19:54.878 "uuid": "ef2bb534-40fd-47cd-b77e-4c6fa59c9939", 00:19:54.878 "assigned_rate_limits": { 00:19:54.878 "rw_ios_per_sec": 0, 00:19:54.878 "rw_mbytes_per_sec": 0, 00:19:54.878 "r_mbytes_per_sec": 0, 00:19:54.878 "w_mbytes_per_sec": 0 00:19:54.878 }, 00:19:54.878 "claimed": true, 00:19:54.878 "claim_type": "exclusive_write", 00:19:54.878 "zoned": false, 00:19:54.878 "supported_io_types": { 00:19:54.878 "read": true, 00:19:54.878 "write": true, 00:19:54.878 "unmap": true, 00:19:54.878 "write_zeroes": true, 00:19:54.878 "flush": true, 00:19:54.878 "reset": true, 00:19:54.878 "compare": false, 00:19:54.878 "compare_and_write": false, 00:19:54.878 "abort": true, 00:19:54.878 "nvme_admin": false, 00:19:54.878 
"nvme_io": false 00:19:54.878 }, 00:19:54.878 "memory_domains": [ 00:19:54.878 { 00:19:54.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.878 "dma_device_type": 2 00:19:54.878 } 00:19:54.878 ], 00:19:54.878 "driver_specific": {} 00:19:54.878 } 00:19:54.878 ] 00:19:54.878 13:43:33 -- common/autotest_common.sh@895 -- # return 0 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.878 13:43:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.878 13:43:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.878 "name": "Existed_Raid", 00:19:54.878 "uuid": "475b28c3-ad92-454a-a3c5-d9afba1e863d", 00:19:54.878 "strip_size_kb": 0, 00:19:54.878 "state": "configuring", 00:19:54.878 "raid_level": "raid1", 00:19:54.878 "superblock": true, 00:19:54.878 "num_base_bdevs": 4, 00:19:54.878 "num_base_bdevs_discovered": 2, 00:19:54.878 "num_base_bdevs_operational": 4, 00:19:54.878 "base_bdevs_list": [ 00:19:54.878 { 00:19:54.878 "name": "BaseBdev1", 00:19:54.878 "uuid": "23495dec-29f9-48e8-ba86-e604ac8957f9", 00:19:54.878 "is_configured": true, 00:19:54.878 "data_offset": 2048, 00:19:54.878 "data_size": 63488 00:19:54.878 }, 00:19:54.878 { 00:19:54.878 "name": "BaseBdev2", 00:19:54.878 "uuid": "ef2bb534-40fd-47cd-b77e-4c6fa59c9939", 00:19:54.878 "is_configured": true, 00:19:54.878 "data_offset": 2048, 00:19:54.878 "data_size": 63488 00:19:54.878 }, 00:19:54.878 { 00:19:54.878 "name": "BaseBdev3", 00:19:54.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.878 "is_configured": false, 00:19:54.878 "data_offset": 0, 00:19:54.878 "data_size": 0 00:19:54.878 }, 00:19:54.878 { 00:19:54.878 "name": "BaseBdev4", 00:19:54.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.878 "is_configured": false, 00:19:54.878 "data_offset": 0, 00:19:54.878 "data_size": 0 00:19:54.878 } 00:19:54.878 ] 00:19:54.878 }' 00:19:54.878 13:43:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.878 13:43:34 -- common/autotest_common.sh@10 -- # set +x 00:19:55.443 13:43:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:55.701 [2024-07-10 13:43:34.928359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.701 BaseBdev3 00:19:55.701 13:43:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:55.701 13:43:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:55.701 13:43:34 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:55.701 13:43:34 -- common/autotest_common.sh@889 -- # local i 00:19:55.701 13:43:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:55.701 13:43:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:55.701 13:43:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.959 13:43:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:55.959 [ 00:19:55.959 { 00:19:55.959 "name": "BaseBdev3", 00:19:55.959 "aliases": [ 00:19:55.959 "c48ff451-b244-4827-b847-43871b9224f7" 00:19:55.959 ], 00:19:55.959 "product_name": "Malloc disk", 00:19:55.959 "block_size": 512, 00:19:55.959 "num_blocks": 65536, 00:19:55.959 "uuid": "c48ff451-b244-4827-b847-43871b9224f7", 00:19:55.959 "assigned_rate_limits": { 00:19:55.959 "rw_ios_per_sec": 0, 00:19:55.959 "rw_mbytes_per_sec": 0, 00:19:55.959 "r_mbytes_per_sec": 0, 00:19:55.959 "w_mbytes_per_sec": 0 00:19:55.959 }, 00:19:55.959 "claimed": true, 00:19:55.959 "claim_type": "exclusive_write", 00:19:55.959 "zoned": false, 00:19:55.959 "supported_io_types": { 00:19:55.959 "read": true, 00:19:55.959 "write": true, 00:19:55.959 "unmap": true, 00:19:55.959 "write_zeroes": true, 00:19:55.959 "flush": true, 00:19:55.959 "reset": true, 00:19:55.959 "compare": false, 00:19:55.959 "compare_and_write": false, 00:19:55.959 "abort": true, 00:19:55.959 "nvme_admin": false, 00:19:55.959 "nvme_io": false 00:19:55.959 }, 00:19:55.959 "memory_domains": [ 00:19:55.959 { 00:19:55.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.959 "dma_device_type": 2 00:19:55.959 } 00:19:55.959 ], 00:19:55.959 "driver_specific": {} 00:19:55.959 } 00:19:55.959 ] 00:19:55.959 13:43:35 -- common/autotest_common.sh@895 -- # return 0 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.959 13:43:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.217 13:43:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.217 "name": "Existed_Raid", 00:19:56.217 "uuid": "475b28c3-ad92-454a-a3c5-d9afba1e863d", 00:19:56.217 "strip_size_kb": 0, 00:19:56.217 "state": "configuring", 00:19:56.217 "raid_level": "raid1", 00:19:56.217 "superblock": true, 00:19:56.217 "num_base_bdevs": 4, 00:19:56.217 "num_base_bdevs_discovered": 3, 00:19:56.217 "num_base_bdevs_operational": 4, 00:19:56.217 
"base_bdevs_list": [ 00:19:56.217 { 00:19:56.217 "name": "BaseBdev1", 00:19:56.217 "uuid": "23495dec-29f9-48e8-ba86-e604ac8957f9", 00:19:56.217 "is_configured": true, 00:19:56.217 "data_offset": 2048, 00:19:56.217 "data_size": 63488 00:19:56.217 }, 00:19:56.217 { 00:19:56.217 "name": "BaseBdev2", 00:19:56.217 "uuid": "ef2bb534-40fd-47cd-b77e-4c6fa59c9939", 00:19:56.217 "is_configured": true, 00:19:56.217 "data_offset": 2048, 00:19:56.217 "data_size": 63488 00:19:56.217 }, 00:19:56.217 { 00:19:56.217 "name": "BaseBdev3", 00:19:56.217 "uuid": "c48ff451-b244-4827-b847-43871b9224f7", 00:19:56.217 "is_configured": true, 00:19:56.217 "data_offset": 2048, 00:19:56.217 "data_size": 63488 00:19:56.217 }, 00:19:56.217 { 00:19:56.217 "name": "BaseBdev4", 00:19:56.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.217 "is_configured": false, 00:19:56.217 "data_offset": 0, 00:19:56.217 "data_size": 0 00:19:56.217 } 00:19:56.217 ] 00:19:56.217 }' 00:19:56.217 13:43:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.217 13:43:35 -- common/autotest_common.sh@10 -- # set +x 00:19:56.782 13:43:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:57.041 [2024-07-10 13:43:36.240248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:57.041 [2024-07-10 13:43:36.240539] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:57.041 [2024-07-10 13:43:36.240588] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:57.041 [2024-07-10 13:43:36.240747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:57.041 BaseBdev4 00:19:57.041 [2024-07-10 13:43:36.241068] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:57.041 [2024-07-10 13:43:36.241079] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:57.041 [2024-07-10 13:43:36.241221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.041 13:43:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:57.041 13:43:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:57.041 13:43:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:57.041 13:43:36 -- common/autotest_common.sh@889 -- # local i 00:19:57.041 13:43:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:57.041 13:43:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:57.041 13:43:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.318 13:43:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:57.318 [ 00:19:57.318 { 00:19:57.318 "name": "BaseBdev4", 00:19:57.318 "aliases": [ 00:19:57.318 "d9bddaa1-b70f-42f6-b103-9ab96c057498" 00:19:57.318 ], 00:19:57.318 "product_name": "Malloc disk", 00:19:57.318 "block_size": 512, 00:19:57.318 "num_blocks": 65536, 00:19:57.318 "uuid": "d9bddaa1-b70f-42f6-b103-9ab96c057498", 00:19:57.318 "assigned_rate_limits": { 00:19:57.318 "rw_ios_per_sec": 0, 00:19:57.318 "rw_mbytes_per_sec": 0, 00:19:57.318 "r_mbytes_per_sec": 0, 00:19:57.318 "w_mbytes_per_sec": 0 00:19:57.318 }, 00:19:57.318 "claimed": true, 00:19:57.318 "claim_type": 
"exclusive_write", 00:19:57.318 "zoned": false, 00:19:57.318 "supported_io_types": { 00:19:57.318 "read": true, 00:19:57.318 "write": true, 00:19:57.318 "unmap": true, 00:19:57.318 "write_zeroes": true, 00:19:57.318 "flush": true, 00:19:57.318 "reset": true, 00:19:57.318 "compare": false, 00:19:57.318 "compare_and_write": false, 00:19:57.318 "abort": true, 00:19:57.318 "nvme_admin": false, 00:19:57.318 "nvme_io": false 00:19:57.318 }, 00:19:57.318 "memory_domains": [ 00:19:57.318 { 00:19:57.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.318 "dma_device_type": 2 00:19:57.318 } 00:19:57.318 ], 00:19:57.318 "driver_specific": {} 00:19:57.318 } 00:19:57.318 ] 00:19:57.318 13:43:36 -- common/autotest_common.sh@895 -- # return 0 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.318 13:43:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.577 13:43:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.577 "name": "Existed_Raid", 00:19:57.577 "uuid": "475b28c3-ad92-454a-a3c5-d9afba1e863d", 00:19:57.577 "strip_size_kb": 0, 00:19:57.577 "state": "online", 00:19:57.577 "raid_level": "raid1", 00:19:57.577 "superblock": true, 00:19:57.577 "num_base_bdevs": 4, 00:19:57.577 "num_base_bdevs_discovered": 4, 00:19:57.577 "num_base_bdevs_operational": 4, 00:19:57.577 "base_bdevs_list": [ 00:19:57.577 { 00:19:57.577 "name": "BaseBdev1", 00:19:57.577 "uuid": "23495dec-29f9-48e8-ba86-e604ac8957f9", 00:19:57.577 "is_configured": true, 00:19:57.577 "data_offset": 2048, 00:19:57.577 "data_size": 63488 00:19:57.577 }, 00:19:57.577 { 00:19:57.577 "name": "BaseBdev2", 00:19:57.577 "uuid": "ef2bb534-40fd-47cd-b77e-4c6fa59c9939", 00:19:57.577 "is_configured": true, 00:19:57.577 "data_offset": 2048, 00:19:57.577 "data_size": 63488 00:19:57.577 }, 00:19:57.577 { 00:19:57.577 "name": "BaseBdev3", 00:19:57.577 "uuid": "c48ff451-b244-4827-b847-43871b9224f7", 00:19:57.577 "is_configured": true, 00:19:57.577 "data_offset": 2048, 00:19:57.577 "data_size": 63488 00:19:57.577 }, 00:19:57.577 { 00:19:57.577 "name": "BaseBdev4", 00:19:57.577 "uuid": "d9bddaa1-b70f-42f6-b103-9ab96c057498", 00:19:57.577 "is_configured": true, 00:19:57.577 "data_offset": 2048, 00:19:57.577 "data_size": 63488 00:19:57.577 } 00:19:57.577 ] 00:19:57.577 }' 00:19:57.577 13:43:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.577 13:43:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.143 13:43:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:58.402 [2024-07-10 13:43:37.566058] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.402 13:43:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.661 13:43:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.661 "name": "Existed_Raid", 00:19:58.661 "uuid": "475b28c3-ad92-454a-a3c5-d9afba1e863d", 00:19:58.661 "strip_size_kb": 0, 00:19:58.661 "state": "online", 00:19:58.661 "raid_level": "raid1", 00:19:58.661 "superblock": true, 00:19:58.661 "num_base_bdevs": 4, 00:19:58.661 "num_base_bdevs_discovered": 3, 00:19:58.661 "num_base_bdevs_operational": 3, 00:19:58.661 "base_bdevs_list": [ 00:19:58.661 { 00:19:58.661 "name": null, 00:19:58.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.661 "is_configured": false, 00:19:58.661 "data_offset": 2048, 00:19:58.661 "data_size": 63488 00:19:58.661 }, 00:19:58.661 { 00:19:58.661 "name": "BaseBdev2", 00:19:58.661 "uuid": "ef2bb534-40fd-47cd-b77e-4c6fa59c9939", 00:19:58.661 "is_configured": true, 00:19:58.661 "data_offset": 2048, 00:19:58.661 "data_size": 63488 00:19:58.661 }, 00:19:58.661 { 00:19:58.661 "name": "BaseBdev3", 00:19:58.661 "uuid": "c48ff451-b244-4827-b847-43871b9224f7", 00:19:58.661 "is_configured": true, 00:19:58.661 "data_offset": 2048, 00:19:58.661 "data_size": 63488 00:19:58.661 }, 00:19:58.661 { 00:19:58.661 "name": "BaseBdev4", 00:19:58.661 "uuid": "d9bddaa1-b70f-42f6-b103-9ab96c057498", 00:19:58.661 "is_configured": true, 00:19:58.661 "data_offset": 2048, 00:19:58.661 "data_size": 63488 00:19:58.661 } 00:19:58.661 ] 00:19:58.661 }' 00:19:58.661 13:43:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.661 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:19:59.228 13:43:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:59.228 13:43:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:59.228 13:43:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.228 13:43:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:59.487 13:43:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:59.487 13:43:38 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:59.487 13:43:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:59.487 [2024-07-10 13:43:38.822783] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:59.745 13:43:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:59.745 13:43:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:59.745 13:43:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.745 13:43:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.003 13:43:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:00.003 13:43:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.003 13:43:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:00.003 [2024-07-10 13:43:39.292046] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:00.261 13:43:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:00.518 [2024-07-10 13:43:39.759167] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:00.518 [2024-07-10 13:43:39.759258] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.518 [2024-07-10 13:43:39.759325] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.518 [2024-07-10 13:43:39.852525] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.518 [2024-07-10 13:43:39.852630] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:00.518 13:43:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:00.518 13:43:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:00.518 13:43:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.519 13:43:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:00.776 13:43:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:00.776 13:43:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:00.776 13:43:40 -- bdev/bdev_raid.sh@287 -- # killprocess 124556 00:20:00.776 13:43:40 -- common/autotest_common.sh@926 -- # '[' -z 124556 ']' 00:20:00.776 13:43:40 -- common/autotest_common.sh@930 -- # kill -0 124556 00:20:00.776 13:43:40 -- common/autotest_common.sh@931 -- # uname 00:20:00.776 13:43:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.776 13:43:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124556 00:20:00.776 killing process with pid 124556 00:20:00.776 13:43:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:20:00.776 13:43:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:00.776 13:43:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124556' 00:20:00.776 13:43:40 -- common/autotest_common.sh@945 -- # kill 124556 00:20:00.776 13:43:40 -- common/autotest_common.sh@950 -- # wait 124556 00:20:00.776 [2024-07-10 13:43:40.070975] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.776 [2024-07-10 13:43:40.071087] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.167 ************************************ 00:20:02.167 END TEST raid_state_function_test_sb 00:20:02.167 ************************************ 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:02.167 00:20:02.167 real 0m13.403s 00:20:02.167 user 0m23.388s 00:20:02.167 sys 0m1.607s 00:20:02.167 13:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.167 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:02.167 13:43:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:02.167 13:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.167 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:02.167 ************************************ 00:20:02.167 START TEST raid_superblock_test 00:20:02.167 ************************************ 00:20:02.167 13:43:41 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=125011 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125011 /var/tmp/spdk-raid.sock 00:20:02.167 13:43:41 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:02.167 13:43:41 -- common/autotest_common.sh@819 -- # '[' -z 125011 ']' 00:20:02.167 13:43:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.167 13:43:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.167 13:43:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
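The harness pattern for this test, visible in the trace above, is: launch the bdev_svc app on a private RPC socket with the bdev_raid debug log flag, wait for the socket to answer, drive every step through rpc.py, then kill the app at the end. A minimal sketch of that loop (waitforlisten and killprocess are the common/autotest_common.sh helpers echoed above; the pid is whatever the shell captures, not the literal values from this run):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$sock"   # poll until the app answers RPCs on the socket
    # ... every test step that follows is a scripts/rpc.py -s "$sock" call ...
    killprocess "$raid_pid"             # kill, then wait for the reactor to exit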
00:20:02.167 13:43:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.167 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:20:02.167 [2024-07-10 13:43:41.445719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:02.167 [2024-07-10 13:43:41.445974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125011 ] 00:20:02.424 [2024-07-10 13:43:41.588614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.681 [2024-07-10 13:43:41.785775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.681 [2024-07-10 13:43:41.976765] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.938 13:43:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:02.938 13:43:42 -- common/autotest_common.sh@852 -- # return 0 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.938 13:43:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:03.194 malloc1 00:20:03.194 13:43:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:03.453 [2024-07-10 13:43:42.596950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.453 [2024-07-10 13:43:42.597091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.453 [2024-07-10 13:43:42.597131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:03.453 [2024-07-10 13:43:42.597181] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.453 [2024-07-10 13:43:42.598924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.453 [2024-07-10 13:43:42.598997] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.453 pt1 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:03.453 13:43:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:03.712 malloc2 00:20:03.712 13:43:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.712 [2024-07-10 13:43:43.003750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.712 [2024-07-10 13:43:43.003869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.712 [2024-07-10 13:43:43.003935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:03.712 [2024-07-10 13:43:43.003995] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.712 [2024-07-10 13:43:43.005846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.712 [2024-07-10 13:43:43.005918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.712 pt2 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:03.712 13:43:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:03.970 malloc3 00:20:03.970 13:43:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:04.227 [2024-07-10 13:43:43.407008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:04.227 [2024-07-10 13:43:43.407122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.227 [2024-07-10 13:43:43.407168] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:04.227 [2024-07-10 13:43:43.407215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.227 [2024-07-10 13:43:43.409081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.227 [2024-07-10 13:43:43.409156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:04.227 pt3 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:04.227 13:43:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:04.486 malloc4 00:20:04.486 13:43:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:04.486 [2024-07-10 13:43:43.783555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:04.486 [2024-07-10 13:43:43.783710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.486 [2024-07-10 13:43:43.783760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:04.486 [2024-07-10 13:43:43.783811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.486 [2024-07-10 13:43:43.785638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.486 [2024-07-10 13:43:43.785712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:04.486 pt4 00:20:04.486 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:04.486 13:43:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:04.486 13:43:43 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:04.749 [2024-07-10 13:43:43.959326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:04.749 [2024-07-10 13:43:43.960936] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.749 [2024-07-10 13:43:43.961026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:04.749 [2024-07-10 13:43:43.961080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:04.749 [2024-07-10 13:43:43.961295] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:04.749 [2024-07-10 13:43:43.961332] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:04.749 [2024-07-10 13:43:43.961509] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:04.749 [2024-07-10 13:43:43.961836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:04.749 [2024-07-10 13:43:43.961875] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:04.749 [2024-07-10 13:43:43.962045] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.749 13:43:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.750 13:43:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
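Condensed, the setup just traced is: four 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev carrying a fixed UUID, then a raid1 array created over them with an on-disk superblock (-s). The same sequence as a loop, using the commands exactly as echoed above:

    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        scripts/rpc.py -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    scripts/rpc.py -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

The superblock takes 2048 of each member's 65536 blocks, which matches the data_offset 2048 / data_size 63488 reported throughout this trace.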
00:20:04.750 13:43:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.007 13:43:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.007 "name": "raid_bdev1", 00:20:05.007 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:05.007 "strip_size_kb": 0, 00:20:05.007 "state": "online", 00:20:05.007 "raid_level": "raid1", 00:20:05.007 "superblock": true, 00:20:05.007 "num_base_bdevs": 4, 00:20:05.007 "num_base_bdevs_discovered": 4, 00:20:05.007 "num_base_bdevs_operational": 4, 00:20:05.007 "base_bdevs_list": [ 00:20:05.007 { 00:20:05.007 "name": "pt1", 00:20:05.007 "uuid": "df2c58ac-65ef-5822-8595-aada1f1319e5", 00:20:05.007 "is_configured": true, 00:20:05.007 "data_offset": 2048, 00:20:05.007 "data_size": 63488 00:20:05.007 }, 00:20:05.007 { 00:20:05.007 "name": "pt2", 00:20:05.007 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:05.007 "is_configured": true, 00:20:05.007 "data_offset": 2048, 00:20:05.007 "data_size": 63488 00:20:05.007 }, 00:20:05.007 { 00:20:05.007 "name": "pt3", 00:20:05.007 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:05.007 "is_configured": true, 00:20:05.007 "data_offset": 2048, 00:20:05.007 "data_size": 63488 00:20:05.007 }, 00:20:05.007 { 00:20:05.007 "name": "pt4", 00:20:05.007 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:05.007 "is_configured": true, 00:20:05.007 "data_offset": 2048, 00:20:05.007 "data_size": 63488 00:20:05.007 } 00:20:05.007 ] 00:20:05.007 }' 00:20:05.007 13:43:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.007 13:43:44 -- common/autotest_common.sh@10 -- # set +x 00:20:05.574 13:43:44 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.574 13:43:44 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:05.574 [2024-07-10 13:43:44.897836] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.574 13:43:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=536eb87f-b248-4456-856d-da1437216d6b 00:20:05.574 13:43:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 536eb87f-b248-4456-856d-da1437216d6b ']' 00:20:05.574 13:43:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.832 [2024-07-10 13:43:45.073320] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.832 [2024-07-10 13:43:45.073403] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.832 [2024-07-10 13:43:45.073511] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.832 [2024-07-10 13:43:45.073599] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.832 [2024-07-10 13:43:45.073617] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:05.832 13:43:45 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.832 13:43:45 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:06.090 13:43:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:06.090 13:43:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:06.090 13:43:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.090 13:43:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
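That block also shows the verify/teardown idiom repeated for every state transition in this test: pull the array's JSON with bdev_raid_get_bdevs and filter it with jq, capture the UUID for the identity check that comes later, then delete the raid bdev and its passthru members. Roughly (variable names illustrative):

    scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'    # state / num_base_bdevs_* are read from this
    raid_bdev_uuid=$(scripts/rpc.py -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    scripts/rpc.py -s "$sock" bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do
        scripts/rpc.py -s "$sock" bdev_passthru_delete "pt$i"
    done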
00:20:06.090 13:43:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.091 13:43:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:06.349 13:43:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.349 13:43:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:06.607 13:43:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.607 13:43:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:06.865 13:43:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:06.865 13:43:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:06.865 13:43:46 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:06.865 13:43:46 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:06.865 13:43:46 -- common/autotest_common.sh@640 -- # local es=0 00:20:06.865 13:43:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:06.865 13:43:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.865 13:43:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.865 13:43:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.865 13:43:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.865 13:43:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.865 13:43:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:06.865 13:43:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.865 13:43:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:06.865 13:43:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:07.122 [2024-07-10 13:43:46.315131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:07.122 [2024-07-10 13:43:46.316838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:07.122 [2024-07-10 13:43:46.316920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:07.122 [2024-07-10 13:43:46.316967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:07.122 [2024-07-10 13:43:46.317047] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:07.122 [2024-07-10 13:43:46.317133] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:07.122 [2024-07-10 13:43:46.317190] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:07.122 [2024-07-10 13:43:46.317260] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:07.122 [2024-07-10 13:43:46.317304] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.122 [2024-07-10 13:43:46.317324] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:20:07.122 request: 00:20:07.122 { 00:20:07.122 "name": "raid_bdev1", 00:20:07.122 "raid_level": "raid1", 00:20:07.122 "base_bdevs": [ 00:20:07.122 "malloc1", 00:20:07.122 "malloc2", 00:20:07.122 "malloc3", 00:20:07.122 "malloc4" 00:20:07.122 ], 00:20:07.122 "superblock": false, 00:20:07.122 "method": "bdev_raid_create", 00:20:07.122 "req_id": 1 00:20:07.122 } 00:20:07.122 Got JSON-RPC error response 00:20:07.122 response: 00:20:07.122 { 00:20:07.122 "code": -17, 00:20:07.122 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:07.122 } 00:20:07.122 13:43:46 -- common/autotest_common.sh@643 -- # es=1 00:20:07.122 13:43:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.123 13:43:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.123 13:43:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.123 13:43:46 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.123 13:43:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.381 [2024-07-10 13:43:46.686441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.381 [2024-07-10 13:43:46.686581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.381 [2024-07-10 13:43:46.686622] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:07.381 [2024-07-10 13:43:46.686662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.381 [2024-07-10 13:43:46.688452] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.381 [2024-07-10 13:43:46.688556] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.381 [2024-07-10 13:43:46.688692] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:07.381 [2024-07-10 13:43:46.688763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.381 pt1 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.381 13:43:46 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.381 13:43:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.638 13:43:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.638 "name": "raid_bdev1", 00:20:07.638 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:07.638 "strip_size_kb": 0, 00:20:07.638 "state": "configuring", 00:20:07.638 "raid_level": "raid1", 00:20:07.638 "superblock": true, 00:20:07.638 "num_base_bdevs": 4, 00:20:07.638 "num_base_bdevs_discovered": 1, 00:20:07.638 "num_base_bdevs_operational": 4, 00:20:07.638 "base_bdevs_list": [ 00:20:07.638 { 00:20:07.638 "name": "pt1", 00:20:07.638 "uuid": "df2c58ac-65ef-5822-8595-aada1f1319e5", 00:20:07.638 "is_configured": true, 00:20:07.638 "data_offset": 2048, 00:20:07.638 "data_size": 63488 00:20:07.638 }, 00:20:07.638 { 00:20:07.638 "name": null, 00:20:07.638 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:07.638 "is_configured": false, 00:20:07.638 "data_offset": 2048, 00:20:07.638 "data_size": 63488 00:20:07.638 }, 00:20:07.638 { 00:20:07.638 "name": null, 00:20:07.638 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:07.638 "is_configured": false, 00:20:07.638 "data_offset": 2048, 00:20:07.638 "data_size": 63488 00:20:07.638 }, 00:20:07.639 { 00:20:07.639 "name": null, 00:20:07.639 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:07.639 "is_configured": false, 00:20:07.639 "data_offset": 2048, 00:20:07.639 "data_size": 63488 00:20:07.639 } 00:20:07.639 ] 00:20:07.639 }' 00:20:07.639 13:43:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.639 13:43:46 -- common/autotest_common.sh@10 -- # set +x 00:20:08.204 13:43:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:08.204 13:43:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.462 [2024-07-10 13:43:47.640840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.462 [2024-07-10 13:43:47.640958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.462 [2024-07-10 13:43:47.641022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:08.462 [2024-07-10 13:43:47.641060] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.462 [2024-07-10 13:43:47.641523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.462 [2024-07-10 13:43:47.641591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.462 [2024-07-10 13:43:47.641722] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:08.462 [2024-07-10 13:43:47.641771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.462 pt2 00:20:08.462 13:43:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:08.720 [2024-07-10 13:43:47.828520] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.720 13:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.720 13:43:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.720 "name": "raid_bdev1", 00:20:08.720 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:08.720 "strip_size_kb": 0, 00:20:08.720 "state": "configuring", 00:20:08.720 "raid_level": "raid1", 00:20:08.720 "superblock": true, 00:20:08.720 "num_base_bdevs": 4, 00:20:08.720 "num_base_bdevs_discovered": 1, 00:20:08.720 "num_base_bdevs_operational": 4, 00:20:08.720 "base_bdevs_list": [ 00:20:08.720 { 00:20:08.720 "name": "pt1", 00:20:08.720 "uuid": "df2c58ac-65ef-5822-8595-aada1f1319e5", 00:20:08.720 "is_configured": true, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": null, 00:20:08.720 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:08.720 "is_configured": false, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": null, 00:20:08.720 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:08.720 "is_configured": false, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 }, 00:20:08.720 { 00:20:08.720 "name": null, 00:20:08.720 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:08.720 "is_configured": false, 00:20:08.720 "data_offset": 2048, 00:20:08.720 "data_size": 63488 00:20:08.720 } 00:20:08.720 ] 00:20:08.720 }' 00:20:08.720 13:43:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.720 13:43:48 -- common/autotest_common.sh@10 -- # set +x 00:20:09.286 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:09.286 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.286 13:43:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.544 [2024-07-10 13:43:48.750928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.544 [2024-07-10 13:43:48.751048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.544 [2024-07-10 13:43:48.751112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:09.544 [2024-07-10 13:43:48.751147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.544 [2024-07-10 13:43:48.751581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.544 [2024-07-10 13:43:48.751659] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.544 [2024-07-10 13:43:48.751784] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:09.544 [2024-07-10 
13:43:48.751826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.544 pt2 00:20:09.544 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.544 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.544 13:43:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:09.803 [2024-07-10 13:43:48.926611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:09.803 [2024-07-10 13:43:48.926728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.803 [2024-07-10 13:43:48.926771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:09.803 [2024-07-10 13:43:48.926809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.803 [2024-07-10 13:43:48.927196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.803 [2024-07-10 13:43:48.927271] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:09.803 [2024-07-10 13:43:48.927393] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:09.803 [2024-07-10 13:43:48.927432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:09.803 pt3 00:20:09.803 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.803 13:43:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.803 13:43:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:09.803 [2024-07-10 13:43:49.110299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:09.803 [2024-07-10 13:43:49.110415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.803 [2024-07-10 13:43:49.110452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:09.803 [2024-07-10 13:43:49.110486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.803 [2024-07-10 13:43:49.110857] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.803 [2024-07-10 13:43:49.110925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:09.803 [2024-07-10 13:43:49.111037] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:09.803 [2024-07-10 13:43:49.111078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:09.803 [2024-07-10 13:43:49.111212] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:09.803 [2024-07-10 13:43:49.111241] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:09.803 [2024-07-10 13:43:49.111351] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:09.803 [2024-07-10 13:43:49.111625] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:09.803 [2024-07-10 13:43:49.111663] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:09.803 [2024-07-10 13:43:49.111813] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.803 pt4 
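Note that no second bdev_raid_create was issued above: once pt1 was recreated and pt2/pt3/pt4 followed, the examine callback found the superblock on each base bdev ("raid superblock found on bdev ptN") and reassembled raid_bdev1 on its own, bringing it online when the last member appeared. A sketch of that recovery path (bdev_wait_for_examine, used earlier in this log, makes the reassembly point deterministic):

    # recreate the members only; the raid reassembles itself from its superblocks
    for i in 1 2 3 4; do
        scripts/rpc.py -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    scripts/rpc.py -s "$sock" bdev_wait_for_examine   # raid_bdev1 is online again after this returns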
00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.803 13:43:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.804 13:43:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.804 13:43:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.061 13:43:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.061 "name": "raid_bdev1", 00:20:10.061 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:10.061 "strip_size_kb": 0, 00:20:10.061 "state": "online", 00:20:10.061 "raid_level": "raid1", 00:20:10.061 "superblock": true, 00:20:10.061 "num_base_bdevs": 4, 00:20:10.061 "num_base_bdevs_discovered": 4, 00:20:10.061 "num_base_bdevs_operational": 4, 00:20:10.061 "base_bdevs_list": [ 00:20:10.061 { 00:20:10.061 "name": "pt1", 00:20:10.061 "uuid": "df2c58ac-65ef-5822-8595-aada1f1319e5", 00:20:10.061 "is_configured": true, 00:20:10.061 "data_offset": 2048, 00:20:10.061 "data_size": 63488 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "pt2", 00:20:10.061 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:10.061 "is_configured": true, 00:20:10.061 "data_offset": 2048, 00:20:10.061 "data_size": 63488 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "pt3", 00:20:10.061 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:10.061 "is_configured": true, 00:20:10.061 "data_offset": 2048, 00:20:10.061 "data_size": 63488 00:20:10.061 }, 00:20:10.061 { 00:20:10.061 "name": "pt4", 00:20:10.061 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:10.061 "is_configured": true, 00:20:10.061 "data_offset": 2048, 00:20:10.061 "data_size": 63488 00:20:10.061 } 00:20:10.061 ] 00:20:10.061 }' 00:20:10.061 13:43:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.061 13:43:49 -- common/autotest_common.sh@10 -- # set +x 00:20:10.628 13:43:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:10.628 13:43:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:10.887 [2024-07-10 13:43:50.048898] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@430 -- # '[' 536eb87f-b248-4456-856d-da1437216d6b '!=' 536eb87f-b248-4456-856d-da1437216d6b ']' 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:10.887 [2024-07-10 13:43:50.224423] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.887 13:43:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.145 13:43:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.145 "name": "raid_bdev1", 00:20:11.145 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:11.145 "strip_size_kb": 0, 00:20:11.145 "state": "online", 00:20:11.145 "raid_level": "raid1", 00:20:11.145 "superblock": true, 00:20:11.145 "num_base_bdevs": 4, 00:20:11.145 "num_base_bdevs_discovered": 3, 00:20:11.145 "num_base_bdevs_operational": 3, 00:20:11.145 "base_bdevs_list": [ 00:20:11.145 { 00:20:11.145 "name": null, 00:20:11.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.145 "is_configured": false, 00:20:11.145 "data_offset": 2048, 00:20:11.145 "data_size": 63488 00:20:11.145 }, 00:20:11.145 { 00:20:11.145 "name": "pt2", 00:20:11.145 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:11.145 "is_configured": true, 00:20:11.145 "data_offset": 2048, 00:20:11.145 "data_size": 63488 00:20:11.145 }, 00:20:11.145 { 00:20:11.145 "name": "pt3", 00:20:11.145 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:11.145 "is_configured": true, 00:20:11.145 "data_offset": 2048, 00:20:11.145 "data_size": 63488 00:20:11.145 }, 00:20:11.145 { 00:20:11.145 "name": "pt4", 00:20:11.145 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:11.145 "is_configured": true, 00:20:11.145 "data_offset": 2048, 00:20:11.145 "data_size": 63488 00:20:11.145 } 00:20:11.145 ] 00:20:11.145 }' 00:20:11.145 13:43:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.145 13:43:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.716 13:43:50 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:11.974 [2024-07-10 13:43:51.162787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.974 [2024-07-10 13:43:51.162863] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.974 [2024-07-10 13:43:51.162952] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.974 [2024-07-10 13:43:51.163046] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.974 [2024-07-10 13:43:51.163064] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:11.974 13:43:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:11.974 13:43:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:12.232 13:43:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:12.490 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:12.490 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:12.490 13:43:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:12.749 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:12.749 13:43:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:12.749 13:43:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:12.749 13:43:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:12.749 13:43:51 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:12.750 [2024-07-10 13:43:52.067885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:12.750 [2024-07-10 13:43:52.068067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.750 [2024-07-10 13:43:52.068122] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:12.750 [2024-07-10 13:43:52.068168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.750 [2024-07-10 13:43:52.070153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.750 [2024-07-10 13:43:52.070257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:12.750 [2024-07-10 13:43:52.070433] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:12.750 [2024-07-10 13:43:52.070511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:12.750 pt2 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.750 13:43:52 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.009 13:43:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.009 "name": "raid_bdev1", 00:20:13.009 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:13.009 "strip_size_kb": 0, 00:20:13.009 "state": "configuring", 00:20:13.009 "raid_level": "raid1", 00:20:13.009 "superblock": true, 00:20:13.009 "num_base_bdevs": 4, 00:20:13.009 "num_base_bdevs_discovered": 1, 00:20:13.009 "num_base_bdevs_operational": 3, 00:20:13.009 "base_bdevs_list": [ 00:20:13.009 { 00:20:13.009 "name": null, 00:20:13.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.009 "is_configured": false, 00:20:13.009 "data_offset": 2048, 00:20:13.009 "data_size": 63488 00:20:13.009 }, 00:20:13.009 { 00:20:13.009 "name": "pt2", 00:20:13.009 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:13.009 "is_configured": true, 00:20:13.009 "data_offset": 2048, 00:20:13.009 "data_size": 63488 00:20:13.009 }, 00:20:13.009 { 00:20:13.009 "name": null, 00:20:13.009 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:13.009 "is_configured": false, 00:20:13.009 "data_offset": 2048, 00:20:13.009 "data_size": 63488 00:20:13.009 }, 00:20:13.009 { 00:20:13.009 "name": null, 00:20:13.009 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:13.009 "is_configured": false, 00:20:13.009 "data_offset": 2048, 00:20:13.009 "data_size": 63488 00:20:13.009 } 00:20:13.009 ] 00:20:13.009 }' 00:20:13.009 13:43:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.009 13:43:52 -- common/autotest_common.sh@10 -- # set +x 00:20:13.575 13:43:52 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:13.575 13:43:52 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:13.575 13:43:52 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:13.834 [2024-07-10 13:43:53.056145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:13.834 [2024-07-10 13:43:53.056295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.834 [2024-07-10 13:43:53.056346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:13.834 [2024-07-10 13:43:53.056386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.834 [2024-07-10 13:43:53.056834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.834 [2024-07-10 13:43:53.056907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:13.834 [2024-07-10 13:43:53.057048] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:13.834 [2024-07-10 13:43:53.057092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:13.834 pt3 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.834 13:43:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.093 13:43:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.093 "name": "raid_bdev1", 00:20:14.093 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:14.093 "strip_size_kb": 0, 00:20:14.093 "state": "configuring", 00:20:14.093 "raid_level": "raid1", 00:20:14.093 "superblock": true, 00:20:14.093 "num_base_bdevs": 4, 00:20:14.093 "num_base_bdevs_discovered": 2, 00:20:14.093 "num_base_bdevs_operational": 3, 00:20:14.093 "base_bdevs_list": [ 00:20:14.093 { 00:20:14.093 "name": null, 00:20:14.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.093 "is_configured": false, 00:20:14.093 "data_offset": 2048, 00:20:14.093 "data_size": 63488 00:20:14.093 }, 00:20:14.093 { 00:20:14.093 "name": "pt2", 00:20:14.093 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:14.093 "is_configured": true, 00:20:14.093 "data_offset": 2048, 00:20:14.093 "data_size": 63488 00:20:14.093 }, 00:20:14.093 { 00:20:14.093 "name": "pt3", 00:20:14.093 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:14.093 "is_configured": true, 00:20:14.093 "data_offset": 2048, 00:20:14.093 "data_size": 63488 00:20:14.093 }, 00:20:14.093 { 00:20:14.093 "name": null, 00:20:14.093 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:14.093 "is_configured": false, 00:20:14.093 "data_offset": 2048, 00:20:14.093 "data_size": 63488 00:20:14.093 } 00:20:14.093 ] 00:20:14.093 }' 00:20:14.093 13:43:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.093 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:20:14.676 13:43:53 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:14.676 13:43:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:14.676 13:43:53 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:14.676 13:43:53 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:14.676 [2024-07-10 13:43:53.994456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:14.676 [2024-07-10 13:43:53.994590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.676 [2024-07-10 13:43:53.994635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:14.676 [2024-07-10 13:43:53.994666] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.676 [2024-07-10 13:43:53.995094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.676 [2024-07-10 13:43:53.995150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:14.676 [2024-07-10 13:43:53.995268] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:14.676 [2024-07-10 13:43:53.995313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:14.676 [2024-07-10 13:43:53.995440] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:20:14.676 [2024-07-10 13:43:53.995469] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
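[editor's note] For orientation: the verify_raid_bdev_state helper traced at bdev_raid.sh@117-@129 above reduces to one RPC plus a jq filter over the result. A minimal sketch of the same check follows (script path, socket path, and JSON field names are taken from the trace; the expected values here are illustrative, not the harness's own assertion logic):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
# at this point in the test the raid should still be assembling: 2 of 4 legs found
[[ "$state" == configuring && "$discovered" -eq 2 ]] \
  || echo "unexpected raid state: $state ($discovered base bdevs discovered)" >&2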
00:20:14.676 [2024-07-10 13:43:53.995608] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:14.676 [2024-07-10 13:43:53.995940] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:20:14.676 [2024-07-10 13:43:53.995979] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:20:14.676 [2024-07-10 13:43:53.996164] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.676 pt4 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.676 13:43:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.934 13:43:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.934 "name": "raid_bdev1", 00:20:14.934 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:14.934 "strip_size_kb": 0, 00:20:14.934 "state": "online", 00:20:14.934 "raid_level": "raid1", 00:20:14.934 "superblock": true, 00:20:14.934 "num_base_bdevs": 4, 00:20:14.934 "num_base_bdevs_discovered": 3, 00:20:14.934 "num_base_bdevs_operational": 3, 00:20:14.934 "base_bdevs_list": [ 00:20:14.934 { 00:20:14.934 "name": null, 00:20:14.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.934 "is_configured": false, 00:20:14.934 "data_offset": 2048, 00:20:14.934 "data_size": 63488 00:20:14.934 }, 00:20:14.935 { 00:20:14.935 "name": "pt2", 00:20:14.935 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:14.935 "is_configured": true, 00:20:14.935 "data_offset": 2048, 00:20:14.935 "data_size": 63488 00:20:14.935 }, 00:20:14.935 { 00:20:14.935 "name": "pt3", 00:20:14.935 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:14.935 "is_configured": true, 00:20:14.935 "data_offset": 2048, 00:20:14.935 "data_size": 63488 00:20:14.935 }, 00:20:14.935 { 00:20:14.935 "name": "pt4", 00:20:14.935 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:14.935 "is_configured": true, 00:20:14.935 "data_offset": 2048, 00:20:14.935 "data_size": 63488 00:20:14.935 } 00:20:14.935 ] 00:20:14.935 }' 00:20:14.935 13:43:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.935 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:20:15.504 13:43:54 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:15.504 13:43:54 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:15.763 [2024-07-10 13:43:54.968744] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.763 [2024-07-10 13:43:54.968830] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:20:15.763 [2024-07-10 13:43:54.968935] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.763 [2024-07-10 13:43:54.969011] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.763 [2024-07-10 13:43:54.969028] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:20:15.763 13:43:54 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.763 13:43:54 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.023 [2024-07-10 13:43:55.316191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.023 [2024-07-10 13:43:55.316318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.023 [2024-07-10 13:43:55.316363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:16.023 [2024-07-10 13:43:55.316395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.023 [2024-07-10 13:43:55.318134] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.023 [2024-07-10 13:43:55.318246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.023 [2024-07-10 13:43:55.318357] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:16.023 [2024-07-10 13:43:55.318427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.023 pt1 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.023 13:43:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.282 13:43:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.282 "name": "raid_bdev1", 00:20:16.282 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:16.282 "strip_size_kb": 0, 00:20:16.282 "state": "configuring", 00:20:16.282 "raid_level": "raid1", 00:20:16.282 "superblock": true, 00:20:16.282 "num_base_bdevs": 4, 00:20:16.282 "num_base_bdevs_discovered": 1, 00:20:16.282 "num_base_bdevs_operational": 4, 00:20:16.282 "base_bdevs_list": [ 00:20:16.282 { 00:20:16.282 "name": "pt1", 00:20:16.282 "uuid": 
"df2c58ac-65ef-5822-8595-aada1f1319e5", 00:20:16.282 "is_configured": true, 00:20:16.282 "data_offset": 2048, 00:20:16.282 "data_size": 63488 00:20:16.282 }, 00:20:16.282 { 00:20:16.282 "name": null, 00:20:16.282 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:16.282 "is_configured": false, 00:20:16.282 "data_offset": 2048, 00:20:16.282 "data_size": 63488 00:20:16.282 }, 00:20:16.282 { 00:20:16.282 "name": null, 00:20:16.282 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:16.282 "is_configured": false, 00:20:16.282 "data_offset": 2048, 00:20:16.282 "data_size": 63488 00:20:16.282 }, 00:20:16.282 { 00:20:16.282 "name": null, 00:20:16.282 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:16.282 "is_configured": false, 00:20:16.282 "data_offset": 2048, 00:20:16.282 "data_size": 63488 00:20:16.282 } 00:20:16.282 ] 00:20:16.282 }' 00:20:16.282 13:43:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.282 13:43:55 -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:16.851 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:16.851 13:43:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:17.110 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:17.110 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.110 13:43:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:17.376 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:17.376 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.376 13:43:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:17.643 [2024-07-10 13:43:56.890980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:17.643 [2024-07-10 13:43:56.891461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.643 [2024-07-10 13:43:56.891605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:20:17.643 [2024-07-10 13:43:56.891726] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.643 [2024-07-10 13:43:56.892264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.643 [2024-07-10 13:43:56.892445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:17.643 [2024-07-10 13:43:56.892668] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:17.643 [2024-07-10 13:43:56.892710] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:17.643 [2024-07-10 13:43:56.892739] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.643 [2024-07-10 13:43:56.892792] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 
00:20:17.643 [2024-07-10 13:43:56.892907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:17.643 pt4 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.643 13:43:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.901 13:43:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.901 "name": "raid_bdev1", 00:20:17.901 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:17.901 "strip_size_kb": 0, 00:20:17.901 "state": "configuring", 00:20:17.901 "raid_level": "raid1", 00:20:17.901 "superblock": true, 00:20:17.901 "num_base_bdevs": 4, 00:20:17.901 "num_base_bdevs_discovered": 1, 00:20:17.901 "num_base_bdevs_operational": 3, 00:20:17.901 "base_bdevs_list": [ 00:20:17.901 { 00:20:17.901 "name": null, 00:20:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.901 "is_configured": false, 00:20:17.901 "data_offset": 2048, 00:20:17.901 "data_size": 63488 00:20:17.901 }, 00:20:17.901 { 00:20:17.901 "name": null, 00:20:17.901 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:17.901 "is_configured": false, 00:20:17.901 "data_offset": 2048, 00:20:17.901 "data_size": 63488 00:20:17.901 }, 00:20:17.901 { 00:20:17.901 "name": null, 00:20:17.901 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:17.901 "is_configured": false, 00:20:17.901 "data_offset": 2048, 00:20:17.901 "data_size": 63488 00:20:17.901 }, 00:20:17.901 { 00:20:17.901 "name": "pt4", 00:20:17.901 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:17.901 "is_configured": true, 00:20:17.901 "data_offset": 2048, 00:20:17.901 "data_size": 63488 00:20:17.901 } 00:20:17.901 ] 00:20:17.901 }' 00:20:17.901 13:43:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.901 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:20:18.469 13:43:57 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:18.469 13:43:57 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:18.469 13:43:57 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:18.728 [2024-07-10 13:43:57.865299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:18.728 [2024-07-10 13:43:57.865653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.728 [2024-07-10 13:43:57.865792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:20:18.728 [2024-07-10 13:43:57.865900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.728 [2024-07-10 
13:43:57.866429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.728 [2024-07-10 13:43:57.866597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:18.728 [2024-07-10 13:43:57.866776] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:18.728 [2024-07-10 13:43:57.866826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:18.728 pt2 00:20:18.728 13:43:57 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:18.728 13:43:57 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:18.728 13:43:57 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:18.728 [2024-07-10 13:43:58.056995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:18.728 [2024-07-10 13:43:58.057508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.728 [2024-07-10 13:43:58.057644] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:20:18.728 [2024-07-10 13:43:58.057756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.728 [2024-07-10 13:43:58.058264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.728 [2024-07-10 13:43:58.058458] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:18.728 [2024-07-10 13:43:58.058666] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:18.728 [2024-07-10 13:43:58.058719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:18.728 [2024-07-10 13:43:58.058902] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:20:18.728 [2024-07-10 13:43:58.058935] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:18.728 [2024-07-10 13:43:58.059085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:20:18.728 [2024-07-10 13:43:58.059396] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:20:18.728 [2024-07-10 13:43:58.059436] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:20:18.728 [2024-07-10 13:43:58.059600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.728 pt3 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:18.728 13:43:58 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.728 13:43:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.988 13:43:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.988 "name": "raid_bdev1", 00:20:18.988 "uuid": "536eb87f-b248-4456-856d-da1437216d6b", 00:20:18.988 "strip_size_kb": 0, 00:20:18.988 "state": "online", 00:20:18.988 "raid_level": "raid1", 00:20:18.988 "superblock": true, 00:20:18.988 "num_base_bdevs": 4, 00:20:18.988 "num_base_bdevs_discovered": 3, 00:20:18.988 "num_base_bdevs_operational": 3, 00:20:18.988 "base_bdevs_list": [ 00:20:18.988 { 00:20:18.988 "name": null, 00:20:18.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.988 "is_configured": false, 00:20:18.988 "data_offset": 2048, 00:20:18.988 "data_size": 63488 00:20:18.988 }, 00:20:18.988 { 00:20:18.988 "name": "pt2", 00:20:18.988 "uuid": "09df7afc-6228-5a8a-8c70-8cf1c14aa121", 00:20:18.988 "is_configured": true, 00:20:18.988 "data_offset": 2048, 00:20:18.988 "data_size": 63488 00:20:18.988 }, 00:20:18.988 { 00:20:18.988 "name": "pt3", 00:20:18.988 "uuid": "c242f26f-dbc7-5f88-a548-05d9c8e9bed0", 00:20:18.988 "is_configured": true, 00:20:18.988 "data_offset": 2048, 00:20:18.988 "data_size": 63488 00:20:18.988 }, 00:20:18.988 { 00:20:18.988 "name": "pt4", 00:20:18.988 "uuid": "a39c6415-63d0-52a7-9442-1b8d5266a519", 00:20:18.988 "is_configured": true, 00:20:18.988 "data_offset": 2048, 00:20:18.988 "data_size": 63488 00:20:18.988 } 00:20:18.988 ] 00:20:18.988 }' 00:20:18.988 13:43:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.988 13:43:58 -- common/autotest_common.sh@10 -- # set +x 00:20:19.556 13:43:58 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:19.556 13:43:58 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:19.815 [2024-07-10 13:43:58.979622] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.815 13:43:59 -- bdev/bdev_raid.sh@506 -- # '[' 536eb87f-b248-4456-856d-da1437216d6b '!=' 536eb87f-b248-4456-856d-da1437216d6b ']' 00:20:19.815 13:43:59 -- bdev/bdev_raid.sh@511 -- # killprocess 125011 00:20:19.815 13:43:59 -- common/autotest_common.sh@926 -- # '[' -z 125011 ']' 00:20:19.815 13:43:59 -- common/autotest_common.sh@930 -- # kill -0 125011 00:20:19.815 13:43:59 -- common/autotest_common.sh@931 -- # uname 00:20:19.815 13:43:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.815 13:43:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125011 00:20:19.815 killing process with pid 125011 00:20:19.815 13:43:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:19.815 13:43:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:19.815 13:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125011' 00:20:19.815 13:43:59 -- common/autotest_common.sh@945 -- # kill 125011 00:20:19.815 13:43:59 -- common/autotest_common.sh@950 -- # wait 125011 00:20:19.815 [2024-07-10 13:43:59.029742] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.815 [2024-07-10 13:43:59.029811] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.815 [2024-07-10 13:43:59.029890] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.815 [2024-07-10 
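[editor's note] The final assertion of this test, traced just below at bdev_raid.sh@506, checks that the raid bdev kept its UUID across all the disassembly/reassembly above. Sketch of the same comparison (UUID value copied from the trace):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
expected=536eb87f-b248-4456-856d-da1437216d6b   # uuid assigned when raid_bdev1 was first created
actual=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
[[ "$actual" == "$expected" ]] || echo "raid_bdev1 came back with a different uuid" >&2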
13:43:59.029927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:20:20.074 [2024-07-10 13:43:59.417678] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.453 ************************************ 00:20:21.453 END TEST raid_superblock_test 00:20:21.453 ************************************ 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:21.453 00:20:21.453 real 0m19.275s 00:20:21.453 user 0m34.994s 00:20:21.453 sys 0m2.348s 00:20:21.453 13:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.453 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:21.453 13:44:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:21.453 13:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:21.453 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:21.453 ************************************ 00:20:21.453 START TEST raid_rebuild_test 00:20:21.453 ************************************ 00:20:21.453 13:44:00 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@544 -- # raid_pid=125699 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125699 /var/tmp/spdk-raid.sock 00:20:21.453 13:44:00 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:21.453 13:44:00 -- common/autotest_common.sh@819 -- # '[' -z 125699 ']' 00:20:21.453 13:44:00 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.453 13:44:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:21.453 13:44:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.453 13:44:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:21.453 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:20:21.453 [2024-07-10 13:44:00.792321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:21.453 [2024-07-10 13:44:00.792504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125699 ] 00:20:21.453 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:21.453 Zero copy mechanism will not be used. 00:20:21.713 [2024-07-10 13:44:00.947327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.972 [2024-07-10 13:44:01.116331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.972 [2024-07-10 13:44:01.301861] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.540 13:44:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.540 13:44:01 -- common/autotest_common.sh@852 -- # return 0 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.540 BaseBdev1 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:22.540 13:44:01 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:22.799 BaseBdev2 00:20:22.799 13:44:02 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:23.059 spare_malloc 00:20:23.059 13:44:02 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:23.059 spare_delay 00:20:23.059 13:44:02 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:23.318 [2024-07-10 13:44:02.577395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.318 [2024-07-10 13:44:02.577543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.318 [2024-07-10 13:44:02.577587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:23.318 [2024-07-10 13:44:02.577642] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.318 [2024-07-10 13:44:02.579865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.318 [2024-07-10 13:44:02.579941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.318 spare 
00:20:23.318 13:44:02 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:23.577 [2024-07-10 13:44:02.761120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.577 [2024-07-10 13:44:02.762917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.577 [2024-07-10 13:44:02.763034] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:23.577 [2024-07-10 13:44:02.763070] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:23.577 [2024-07-10 13:44:02.763283] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:23.577 [2024-07-10 13:44:02.763622] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:23.577 [2024-07-10 13:44:02.763666] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:23.577 [2024-07-10 13:44:02.763868] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.577 13:44:02 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.577 13:44:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.578 13:44:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.838 13:44:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.838 "name": "raid_bdev1", 00:20:23.838 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:23.838 "strip_size_kb": 0, 00:20:23.838 "state": "online", 00:20:23.838 "raid_level": "raid1", 00:20:23.838 "superblock": false, 00:20:23.838 "num_base_bdevs": 2, 00:20:23.838 "num_base_bdevs_discovered": 2, 00:20:23.838 "num_base_bdevs_operational": 2, 00:20:23.838 "base_bdevs_list": [ 00:20:23.838 { 00:20:23.838 "name": "BaseBdev1", 00:20:23.838 "uuid": "f89016f1-3056-4dcb-b0e2-d2c02a12944c", 00:20:23.838 "is_configured": true, 00:20:23.838 "data_offset": 0, 00:20:23.838 "data_size": 65536 00:20:23.838 }, 00:20:23.838 { 00:20:23.838 "name": "BaseBdev2", 00:20:23.838 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:23.838 "is_configured": true, 00:20:23.838 "data_offset": 0, 00:20:23.838 "data_size": 65536 00:20:23.838 } 00:20:23.838 ] 00:20:23.838 }' 00:20:23.838 13:44:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.838 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:20:24.404 13:44:03 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:24.404 13:44:03 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 
00:20:24.404 [2024-07-10 13:44:03.715705] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.404 13:44:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:24.404 13:44:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:24.404 13:44:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.685 13:44:03 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:24.685 13:44:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:24.685 13:44:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:24.685 13:44:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@12 -- # local i 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:24.685 13:44:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:24.956 [2024-07-10 13:44:04.051019] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:24.956 /dev/nbd0 00:20:24.956 13:44:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:24.956 13:44:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:24.956 13:44:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:24.956 13:44:04 -- common/autotest_common.sh@857 -- # local i 00:20:24.956 13:44:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:24.956 13:44:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:24.956 13:44:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:24.956 13:44:04 -- common/autotest_common.sh@861 -- # break 00:20:24.956 13:44:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:24.956 13:44:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:24.956 13:44:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.956 1+0 records in 00:20:24.956 1+0 records out 00:20:24.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686785 s, 6.0 MB/s 00:20:24.956 13:44:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.956 13:44:04 -- common/autotest_common.sh@874 -- # size=4096 00:20:24.956 13:44:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.956 13:44:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:24.956 13:44:04 -- common/autotest_common.sh@877 -- # return 0 00:20:24.956 13:44:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.956 13:44:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:24.956 13:44:04 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:24.956 13:44:04 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:24.956 13:44:04 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:28.244 65536+0 records in 00:20:28.244 65536+0 records out 
00:20:28.244 33554432 bytes (34 MB, 32 MiB) copied, 3.0893 s, 10.9 MB/s 00:20:28.244 13:44:07 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@51 -- # local i 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:28.244 [2024-07-10 13:44:07.405635] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@41 -- # break 00:20:28.244 13:44:07 -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.244 13:44:07 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:28.503 [2024-07-10 13:44:07.672893] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.503 13:44:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.762 13:44:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.762 "name": "raid_bdev1", 00:20:28.762 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:28.762 "strip_size_kb": 0, 00:20:28.762 "state": "online", 00:20:28.762 "raid_level": "raid1", 00:20:28.762 "superblock": false, 00:20:28.762 "num_base_bdevs": 2, 00:20:28.762 "num_base_bdevs_discovered": 1, 00:20:28.762 "num_base_bdevs_operational": 1, 00:20:28.762 "base_bdevs_list": [ 00:20:28.762 { 00:20:28.762 "name": null, 00:20:28.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.762 "is_configured": false, 00:20:28.762 "data_offset": 0, 
00:20:28.762 "data_size": 65536 00:20:28.762 }, 00:20:28.762 { 00:20:28.762 "name": "BaseBdev2", 00:20:28.762 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:28.762 "is_configured": true, 00:20:28.762 "data_offset": 0, 00:20:28.762 "data_size": 65536 00:20:28.762 } 00:20:28.762 ] 00:20:28.762 }' 00:20:28.762 13:44:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.762 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 13:44:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.329 [2024-07-10 13:44:08.651227] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:29.329 [2024-07-10 13:44:08.651280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.329 [2024-07-10 13:44:08.666560] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:20:29.329 [2024-07-10 13:44:08.668145] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.329 13:44:08 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.706 "name": "raid_bdev1", 00:20:30.706 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:30.706 "strip_size_kb": 0, 00:20:30.706 "state": "online", 00:20:30.706 "raid_level": "raid1", 00:20:30.706 "superblock": false, 00:20:30.706 "num_base_bdevs": 2, 00:20:30.706 "num_base_bdevs_discovered": 2, 00:20:30.706 "num_base_bdevs_operational": 2, 00:20:30.706 "process": { 00:20:30.706 "type": "rebuild", 00:20:30.706 "target": "spare", 00:20:30.706 "progress": { 00:20:30.706 "blocks": 22528, 00:20:30.706 "percent": 34 00:20:30.706 } 00:20:30.706 }, 00:20:30.706 "base_bdevs_list": [ 00:20:30.706 { 00:20:30.706 "name": "spare", 00:20:30.706 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:30.706 "is_configured": true, 00:20:30.706 "data_offset": 0, 00:20:30.706 "data_size": 65536 00:20:30.706 }, 00:20:30.706 { 00:20:30.706 "name": "BaseBdev2", 00:20:30.706 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:30.706 "is_configured": true, 00:20:30.706 "data_offset": 0, 00:20:30.706 "data_size": 65536 00:20:30.706 } 00:20:30.706 ] 00:20:30.706 }' 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.706 13:44:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:30.965 [2024-07-10 13:44:10.123625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:30.965 [2024-07-10 13:44:10.174992] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.965 [2024-07-10 13:44:10.175079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.965 13:44:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.225 13:44:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.225 "name": "raid_bdev1", 00:20:31.225 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:31.225 "strip_size_kb": 0, 00:20:31.225 "state": "online", 00:20:31.225 "raid_level": "raid1", 00:20:31.225 "superblock": false, 00:20:31.225 "num_base_bdevs": 2, 00:20:31.225 "num_base_bdevs_discovered": 1, 00:20:31.225 "num_base_bdevs_operational": 1, 00:20:31.225 "base_bdevs_list": [ 00:20:31.225 { 00:20:31.225 "name": null, 00:20:31.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.225 "is_configured": false, 00:20:31.225 "data_offset": 0, 00:20:31.225 "data_size": 65536 00:20:31.225 }, 00:20:31.225 { 00:20:31.225 "name": "BaseBdev2", 00:20:31.225 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:31.225 "is_configured": true, 00:20:31.225 "data_offset": 0, 00:20:31.225 "data_size": 65536 00:20:31.225 } 00:20:31.225 ] 00:20:31.225 }' 00:20:31.225 13:44:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.225 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:20:31.793 13:44:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.794 13:44:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.794 13:44:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.794 "name": "raid_bdev1", 00:20:31.794 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:31.794 "strip_size_kb": 0, 00:20:31.794 "state": "online", 00:20:31.794 "raid_level": "raid1", 00:20:31.794 "superblock": false, 00:20:31.794 "num_base_bdevs": 2, 00:20:31.794 "num_base_bdevs_discovered": 1, 00:20:31.794 "num_base_bdevs_operational": 1, 00:20:31.794 "base_bdevs_list": [ 00:20:31.794 { 00:20:31.794 "name": null, 00:20:31.794 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:31.794 "is_configured": false, 00:20:31.794 "data_offset": 0, 00:20:31.794 "data_size": 65536 00:20:31.794 }, 00:20:31.794 { 00:20:31.794 "name": "BaseBdev2", 00:20:31.794 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:31.794 "is_configured": true, 00:20:31.794 "data_offset": 0, 00:20:31.794 "data_size": 65536 00:20:31.794 } 00:20:31.794 ] 00:20:31.794 }' 00:20:31.794 13:44:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.055 13:44:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.055 13:44:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.055 13:44:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.055 13:44:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:32.314 [2024-07-10 13:44:11.416839] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:32.314 [2024-07-10 13:44:11.416883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:32.314 [2024-07-10 13:44:11.431110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:32.314 [2024-07-10 13:44:11.432627] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.314 13:44:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.252 13:44:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.511 "name": "raid_bdev1", 00:20:33.511 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:33.511 "strip_size_kb": 0, 00:20:33.511 "state": "online", 00:20:33.511 "raid_level": "raid1", 00:20:33.511 "superblock": false, 00:20:33.511 "num_base_bdevs": 2, 00:20:33.511 "num_base_bdevs_discovered": 2, 00:20:33.511 "num_base_bdevs_operational": 2, 00:20:33.511 "process": { 00:20:33.511 "type": "rebuild", 00:20:33.511 "target": "spare", 00:20:33.511 "progress": { 00:20:33.511 "blocks": 22528, 00:20:33.511 "percent": 34 00:20:33.511 } 00:20:33.511 }, 00:20:33.511 "base_bdevs_list": [ 00:20:33.511 { 00:20:33.511 "name": "spare", 00:20:33.511 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:33.511 "is_configured": true, 00:20:33.511 "data_offset": 0, 00:20:33.511 "data_size": 65536 00:20:33.511 }, 00:20:33.511 { 00:20:33.511 "name": "BaseBdev2", 00:20:33.511 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:33.511 "is_configured": true, 00:20:33.511 "data_offset": 0, 00:20:33.511 "data_size": 65536 00:20:33.511 } 00:20:33.511 ] 00:20:33.511 }' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:33.511 13:44:12 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@657 -- # local timeout=367 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.511 13:44:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.782 "name": "raid_bdev1", 00:20:33.782 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:33.782 "strip_size_kb": 0, 00:20:33.782 "state": "online", 00:20:33.782 "raid_level": "raid1", 00:20:33.782 "superblock": false, 00:20:33.782 "num_base_bdevs": 2, 00:20:33.782 "num_base_bdevs_discovered": 2, 00:20:33.782 "num_base_bdevs_operational": 2, 00:20:33.782 "process": { 00:20:33.782 "type": "rebuild", 00:20:33.782 "target": "spare", 00:20:33.782 "progress": { 00:20:33.782 "blocks": 28672, 00:20:33.782 "percent": 43 00:20:33.782 } 00:20:33.782 }, 00:20:33.782 "base_bdevs_list": [ 00:20:33.782 { 00:20:33.782 "name": "spare", 00:20:33.782 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:33.782 "is_configured": true, 00:20:33.782 "data_offset": 0, 00:20:33.782 "data_size": 65536 00:20:33.782 }, 00:20:33.782 { 00:20:33.782 "name": "BaseBdev2", 00:20:33.782 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:33.782 "is_configured": true, 00:20:33.782 "data_offset": 0, 00:20:33.782 "data_size": 65536 00:20:33.782 } 00:20:33.782 ] 00:20:33.782 }' 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.782 13:44:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.731 13:44:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.990 "name": "raid_bdev1", 00:20:34.990 "uuid": 
"a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:34.990 "strip_size_kb": 0, 00:20:34.990 "state": "online", 00:20:34.990 "raid_level": "raid1", 00:20:34.990 "superblock": false, 00:20:34.990 "num_base_bdevs": 2, 00:20:34.990 "num_base_bdevs_discovered": 2, 00:20:34.990 "num_base_bdevs_operational": 2, 00:20:34.990 "process": { 00:20:34.990 "type": "rebuild", 00:20:34.990 "target": "spare", 00:20:34.990 "progress": { 00:20:34.990 "blocks": 55296, 00:20:34.990 "percent": 84 00:20:34.990 } 00:20:34.990 }, 00:20:34.990 "base_bdevs_list": [ 00:20:34.990 { 00:20:34.990 "name": "spare", 00:20:34.990 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:34.990 "is_configured": true, 00:20:34.990 "data_offset": 0, 00:20:34.990 "data_size": 65536 00:20:34.990 }, 00:20:34.990 { 00:20:34.990 "name": "BaseBdev2", 00:20:34.990 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:34.990 "is_configured": true, 00:20:34.990 "data_offset": 0, 00:20:34.990 "data_size": 65536 00:20:34.990 } 00:20:34.990 ] 00:20:34.990 }' 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.990 13:44:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:35.558 [2024-07-10 13:44:14.646848] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:35.558 [2024-07-10 13:44:14.646941] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:35.558 [2024-07-10 13:44:14.647058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.125 13:44:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.384 "name": "raid_bdev1", 00:20:36.384 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:36.384 "strip_size_kb": 0, 00:20:36.384 "state": "online", 00:20:36.384 "raid_level": "raid1", 00:20:36.384 "superblock": false, 00:20:36.384 "num_base_bdevs": 2, 00:20:36.384 "num_base_bdevs_discovered": 2, 00:20:36.384 "num_base_bdevs_operational": 2, 00:20:36.384 "base_bdevs_list": [ 00:20:36.384 { 00:20:36.384 "name": "spare", 00:20:36.384 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:36.384 "is_configured": true, 00:20:36.384 "data_offset": 0, 00:20:36.384 "data_size": 65536 00:20:36.384 }, 00:20:36.384 { 00:20:36.384 "name": "BaseBdev2", 00:20:36.384 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:36.384 "is_configured": true, 00:20:36.384 "data_offset": 0, 00:20:36.384 "data_size": 65536 00:20:36.384 } 00:20:36.384 ] 00:20:36.384 }' 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.384 
13:44:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@660 -- # break 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.384 13:44:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.642 "name": "raid_bdev1", 00:20:36.642 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:36.642 "strip_size_kb": 0, 00:20:36.642 "state": "online", 00:20:36.642 "raid_level": "raid1", 00:20:36.642 "superblock": false, 00:20:36.642 "num_base_bdevs": 2, 00:20:36.642 "num_base_bdevs_discovered": 2, 00:20:36.642 "num_base_bdevs_operational": 2, 00:20:36.642 "base_bdevs_list": [ 00:20:36.642 { 00:20:36.642 "name": "spare", 00:20:36.642 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:36.642 "is_configured": true, 00:20:36.642 "data_offset": 0, 00:20:36.642 "data_size": 65536 00:20:36.642 }, 00:20:36.642 { 00:20:36.642 "name": "BaseBdev2", 00:20:36.642 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:36.642 "is_configured": true, 00:20:36.642 "data_offset": 0, 00:20:36.642 "data_size": 65536 00:20:36.642 } 00:20:36.642 ] 00:20:36.642 }' 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.642 13:44:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.901 13:44:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.901 "name": "raid_bdev1", 00:20:36.901 "uuid": "a7a705ea-0e00-46c8-b941-d09b59ebd5b2", 00:20:36.901 "strip_size_kb": 0, 00:20:36.901 "state": "online", 00:20:36.901 "raid_level": "raid1", 00:20:36.901 "superblock": false, 00:20:36.901 
"num_base_bdevs": 2, 00:20:36.901 "num_base_bdevs_discovered": 2, 00:20:36.901 "num_base_bdevs_operational": 2, 00:20:36.901 "base_bdevs_list": [ 00:20:36.901 { 00:20:36.901 "name": "spare", 00:20:36.901 "uuid": "3b9eadcf-beab-5f7d-b549-109e088bc4ef", 00:20:36.901 "is_configured": true, 00:20:36.901 "data_offset": 0, 00:20:36.901 "data_size": 65536 00:20:36.901 }, 00:20:36.901 { 00:20:36.901 "name": "BaseBdev2", 00:20:36.901 "uuid": "a3dfd3c0-0130-4009-95fb-06eca986cbe6", 00:20:36.901 "is_configured": true, 00:20:36.901 "data_offset": 0, 00:20:36.901 "data_size": 65536 00:20:36.901 } 00:20:36.901 ] 00:20:36.901 }' 00:20:36.901 13:44:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:36.901 13:44:16 -- common/autotest_common.sh@10 -- # set +x 00:20:37.467 13:44:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.725 [2024-07-10 13:44:16.840251] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.725 [2024-07-10 13:44:16.840285] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.725 [2024-07-10 13:44:16.840364] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.725 [2024-07-10 13:44:16.840420] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.725 [2024-07-10 13:44:16.840428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:37.725 13:44:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.725 13:44:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:37.725 13:44:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:37.725 13:44:17 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:37.725 13:44:17 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@12 -- # local i 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:37.725 13:44:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:37.985 /dev/nbd0 00:20:37.985 13:44:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:37.985 13:44:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:37.985 13:44:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:37.985 13:44:17 -- common/autotest_common.sh@857 -- # local i 00:20:37.985 13:44:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:37.985 13:44:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:37.985 13:44:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:37.985 13:44:17 -- common/autotest_common.sh@861 -- # break 00:20:37.985 13:44:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:37.985 13:44:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:37.985 13:44:17 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.985 1+0 records in 00:20:37.985 1+0 records out 00:20:37.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303987 s, 13.5 MB/s 00:20:37.985 13:44:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.985 13:44:17 -- common/autotest_common.sh@874 -- # size=4096 00:20:37.985 13:44:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.985 13:44:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:37.985 13:44:17 -- common/autotest_common.sh@877 -- # return 0 00:20:37.985 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:37.985 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:37.985 13:44:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:38.244 /dev/nbd1 00:20:38.244 13:44:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.244 13:44:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.244 13:44:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:38.244 13:44:17 -- common/autotest_common.sh@857 -- # local i 00:20:38.244 13:44:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:38.244 13:44:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:38.244 13:44:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:38.244 13:44:17 -- common/autotest_common.sh@861 -- # break 00:20:38.244 13:44:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:38.244 13:44:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:38.244 13:44:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.244 1+0 records in 00:20:38.244 1+0 records out 00:20:38.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318816 s, 12.8 MB/s 00:20:38.244 13:44:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.244 13:44:17 -- common/autotest_common.sh@874 -- # size=4096 00:20:38.244 13:44:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.244 13:44:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:38.244 13:44:17 -- common/autotest_common.sh@877 -- # return 0 00:20:38.244 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.244 13:44:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.244 13:44:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:38.505 13:44:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@51 -- # local i 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:38.505 13:44:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.775 
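Annotation: this block is the data-integrity half of the test. BaseBdev1 and the rebuilt spare are both exported as NBD devices, probed with a single direct 4 KiB read, and then compared byte for byte. A condensed sketch of the sequence, using only the commands and arguments visible in this trace (the traced helper dd's into a scratch file and stats its size; writing to /dev/null here is a simplification — and without a superblock the comparison starts at offset 0):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare    /dev/nbd1
    for nbd in nbd0 nbd1; do
        # readiness probe: wait for the partition entry, then one direct read
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$nbd of=/dev/null bs=4096 count=1 iflag=direct
    done
    cmp -i 0 /dev/nbd0 /dev/nbd1    # identical contents => rebuild copied the data correctly
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1

The trace below this point is the mirror of the readiness probe: waitfornbd_exit loops on the same grep until the device leaves /proc/partitions after nbd_stop_disk.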
13:44:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@41 -- # break 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.775 13:44:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@41 -- # break 00:20:39.034 13:44:18 -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.034 13:44:18 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:39.034 13:44:18 -- bdev/bdev_raid.sh@709 -- # killprocess 125699 00:20:39.034 13:44:18 -- common/autotest_common.sh@926 -- # '[' -z 125699 ']' 00:20:39.034 13:44:18 -- common/autotest_common.sh@930 -- # kill -0 125699 00:20:39.034 13:44:18 -- common/autotest_common.sh@931 -- # uname 00:20:39.034 13:44:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.034 13:44:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125699 00:20:39.034 13:44:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:39.034 killing process with pid 125699 00:20:39.034 13:44:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:39.034 13:44:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125699' 00:20:39.034 13:44:18 -- common/autotest_common.sh@945 -- # kill 125699 00:20:39.034 Received shutdown signal, test time was about 60.000000 seconds 00:20:39.034 00:20:39.034 Latency(us) 00:20:39.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.034 =================================================================================================================== 00:20:39.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.034 [2024-07-10 13:44:18.300596] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:39.034 13:44:18 -- common/autotest_common.sh@950 -- # wait 125699 00:20:39.292 [2024-07-10 13:44:18.565175] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:40.669 00:20:40.669 real 0m19.028s 00:20:40.669 user 0m25.983s 00:20:40.669 sys 0m3.361s 00:20:40.669 13:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.669 13:44:19 -- common/autotest_common.sh@10 -- # set +x 
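Annotation: the next test repeats the same rebuild scenario with an on-disk superblock. run_test invokes raid_rebuild_test with the arguments raid1 2 true false, i.e. raid_level=raid1, num_base_bdevs=2, superblock=true, background_io=false; the structural difference traced below is how the create argument is assembled. A minimal sketch of that mapping, reconstructed from the local declarations at bdev_raid.sh@517-@540 in the trace (the function body is otherwise elided):

    raid_rebuild_test() {
        local raid_level=$1       # raid1
        local num_base_bdevs=$2   # 2
        local superblock=$3       # true
        local background_io=$4    # false
        local create_arg=''
        if [ "$superblock" = true ]; then
            create_arg+=' -s'     # bdev_raid_create -s -r raid1 -b '...' -n raid_bdev1
        fi
        # ... device setup and rebuild verification follow, as traced below
    }

One recorded defect worth noting: further down, the check at bdev_raid.sh line 617 expands to '[' = false ']' and the log shows "unary operator expected" — an unquoted variable on the left-hand side expanded to empty. Quoting it (e.g. '[' "$var" = false ']') would avoid the error; the test continues regardless because the failed test simply takes the false branch.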
00:20:40.669 ************************************ 00:20:40.669 END TEST raid_rebuild_test 00:20:40.669 ************************************ 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:40.669 13:44:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:40.669 13:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:40.669 13:44:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.669 ************************************ 00:20:40.669 START TEST raid_rebuild_test_sb 00:20:40.669 ************************************ 00:20:40.669 13:44:19 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@544 -- # raid_pid=126239 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126239 /var/tmp/spdk-raid.sock 00:20:40.669 13:44:19 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:40.669 13:44:19 -- common/autotest_common.sh@819 -- # '[' -z 126239 ']' 00:20:40.669 13:44:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:40.669 13:44:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:40.669 13:44:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:40.669 13:44:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.669 13:44:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.669 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:40.669 Zero copy mechanism will not be used. 00:20:40.669 [2024-07-10 13:44:19.884806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:40.669 [2024-07-10 13:44:19.884943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126239 ] 00:20:40.926 [2024-07-10 13:44:20.039027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.926 [2024-07-10 13:44:20.224126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.185 [2024-07-10 13:44:20.403004] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.442 13:44:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.442 13:44:20 -- common/autotest_common.sh@852 -- # return 0 00:20:41.442 13:44:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:41.442 13:44:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:41.442 13:44:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:41.699 BaseBdev1_malloc 00:20:41.699 13:44:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:41.699 [2024-07-10 13:44:21.036491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:41.699 [2024-07-10 13:44:21.036577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.699 [2024-07-10 13:44:21.036601] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:41.699 [2024-07-10 13:44:21.036631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.699 [2024-07-10 13:44:21.038473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.699 [2024-07-10 13:44:21.038514] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:41.699 BaseBdev1 00:20:41.699 13:44:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:41.699 13:44:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:41.699 13:44:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:41.966 BaseBdev2_malloc 00:20:41.966 13:44:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:42.224 [2024-07-10 13:44:21.465901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:42.224 [2024-07-10 13:44:21.465982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.224 [2024-07-10 13:44:21.466015] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:42.224 [2024-07-10 13:44:21.466053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.224 [2024-07-10 13:44:21.467938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.224 [2024-07-10 13:44:21.467984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:42.224 BaseBdev2 00:20:42.224 13:44:21 -- 
bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:42.482 spare_malloc 00:20:42.482 13:44:21 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:42.741 spare_delay 00:20:42.741 13:44:21 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:42.741 [2024-07-10 13:44:21.998987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:42.741 [2024-07-10 13:44:21.999065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.741 [2024-07-10 13:44:21.999099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:42.741 [2024-07-10 13:44:21.999129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.741 [2024-07-10 13:44:22.000909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.741 [2024-07-10 13:44:22.000971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:42.741 spare 00:20:42.741 13:44:22 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:42.999 [2024-07-10 13:44:22.190781] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.999 [2024-07-10 13:44:22.192550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.999 [2024-07-10 13:44:22.192744] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:42.999 [2024-07-10 13:44:22.192762] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:42.999 [2024-07-10 13:44:22.192891] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:42.999 [2024-07-10 13:44:22.193227] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:42.999 [2024-07-10 13:44:22.193253] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:42.999 [2024-07-10 13:44:22.193447] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.999 13:44:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:43.256 13:44:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.256 "name": "raid_bdev1", 00:20:43.256 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:43.256 "strip_size_kb": 0, 00:20:43.256 "state": "online", 00:20:43.256 "raid_level": "raid1", 00:20:43.256 "superblock": true, 00:20:43.256 "num_base_bdevs": 2, 00:20:43.256 "num_base_bdevs_discovered": 2, 00:20:43.256 "num_base_bdevs_operational": 2, 00:20:43.256 "base_bdevs_list": [ 00:20:43.256 { 00:20:43.256 "name": "BaseBdev1", 00:20:43.256 "uuid": "43e7013c-3f83-5916-a499-eae8587d433a", 00:20:43.256 "is_configured": true, 00:20:43.256 "data_offset": 2048, 00:20:43.256 "data_size": 63488 00:20:43.256 }, 00:20:43.256 { 00:20:43.256 "name": "BaseBdev2", 00:20:43.256 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:43.256 "is_configured": true, 00:20:43.256 "data_offset": 2048, 00:20:43.256 "data_size": 63488 00:20:43.256 } 00:20:43.256 ] 00:20:43.256 }' 00:20:43.256 13:44:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.256 13:44:22 -- common/autotest_common.sh@10 -- # set +x 00:20:43.820 13:44:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:43.820 13:44:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:43.820 [2024-07-10 13:44:23.085322] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.820 13:44:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:43.820 13:44:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:43.820 13:44:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.079 13:44:23 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:44.079 13:44:23 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:44.079 13:44:23 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:44.079 13:44:23 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@12 -- # local i 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.079 13:44:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:44.360 [2024-07-10 13:44:23.456534] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:44.360 /dev/nbd0 00:20:44.360 13:44:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:44.360 13:44:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:44.360 13:44:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:44.361 13:44:23 -- common/autotest_common.sh@857 -- # local i 00:20:44.361 13:44:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:44.361 13:44:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:44.361 13:44:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:44.361 13:44:23 -- common/autotest_common.sh@861 -- # break 00:20:44.361 13:44:23 
-- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:44.361 13:44:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:44.361 13:44:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.361 1+0 records in 00:20:44.361 1+0 records out 00:20:44.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215799 s, 19.0 MB/s 00:20:44.361 13:44:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.361 13:44:23 -- common/autotest_common.sh@874 -- # size=4096 00:20:44.361 13:44:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.361 13:44:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:44.361 13:44:23 -- common/autotest_common.sh@877 -- # return 0 00:20:44.361 13:44:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.361 13:44:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.361 13:44:23 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:44.361 13:44:23 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:44.361 13:44:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:47.649 63488+0 records in 00:20:47.649 63488+0 records out 00:20:47.649 32505856 bytes (33 MB, 31 MiB) copied, 3.35356 s, 9.7 MB/s 00:20:47.649 13:44:26 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@51 -- # local i 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.649 13:44:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:47.908 [2024-07-10 13:44:27.089698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@41 -- # break 00:20:47.908 13:44:27 -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.908 13:44:27 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:48.167 [2024-07-10 13:44:27.376893] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:48.167 
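Annotation: after filling the array over NBD, the test pulls BaseBdev1 out of the raid1 set and asserts that the bdev survives in a degraded state — still "online", but with only one discovered and one operational base bdev, and a null placeholder entry in base_bdevs_list. A minimal sketch of that assertion, assuming only the RPCs and jq fields already present in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # raid1 tolerates the loss of one mirror: state stays online, counts drop to 1
    [[ $(jq -r '.state' <<< "$info") == online ]]
    (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ))
    (( $(jq -r '.num_base_bdevs_operational' <<< "$info") == 1 ))

The verify_raid_bdev_state trace that continues below performs exactly this comparison against the expected values online/raid1/0/1.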
13:44:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.167 13:44:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.426 13:44:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.426 "name": "raid_bdev1", 00:20:48.426 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:48.426 "strip_size_kb": 0, 00:20:48.426 "state": "online", 00:20:48.426 "raid_level": "raid1", 00:20:48.426 "superblock": true, 00:20:48.426 "num_base_bdevs": 2, 00:20:48.426 "num_base_bdevs_discovered": 1, 00:20:48.426 "num_base_bdevs_operational": 1, 00:20:48.426 "base_bdevs_list": [ 00:20:48.426 { 00:20:48.426 "name": null, 00:20:48.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.426 "is_configured": false, 00:20:48.426 "data_offset": 2048, 00:20:48.426 "data_size": 63488 00:20:48.426 }, 00:20:48.426 { 00:20:48.426 "name": "BaseBdev2", 00:20:48.426 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:48.426 "is_configured": true, 00:20:48.426 "data_offset": 2048, 00:20:48.426 "data_size": 63488 00:20:48.426 } 00:20:48.426 ] 00:20:48.426 }' 00:20:48.426 13:44:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.426 13:44:27 -- common/autotest_common.sh@10 -- # set +x 00:20:48.997 13:44:28 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:48.997 [2024-07-10 13:44:28.319297] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:48.997 [2024-07-10 13:44:28.319341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.997 [2024-07-10 13:44:28.334203] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:20:48.997 [2024-07-10 13:44:28.335911] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:48.997 13:44:28 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.382 13:44:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.382 "name": "raid_bdev1", 00:20:50.382 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:50.382 "strip_size_kb": 0, 00:20:50.382 "state": "online", 00:20:50.382 "raid_level": "raid1", 00:20:50.382 "superblock": true, 00:20:50.382 "num_base_bdevs": 2, 00:20:50.382 "num_base_bdevs_discovered": 2, 00:20:50.382 
"num_base_bdevs_operational": 2, 00:20:50.382 "process": { 00:20:50.382 "type": "rebuild", 00:20:50.382 "target": "spare", 00:20:50.382 "progress": { 00:20:50.382 "blocks": 22528, 00:20:50.382 "percent": 35 00:20:50.382 } 00:20:50.382 }, 00:20:50.382 "base_bdevs_list": [ 00:20:50.382 { 00:20:50.382 "name": "spare", 00:20:50.382 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:50.382 "is_configured": true, 00:20:50.382 "data_offset": 2048, 00:20:50.382 "data_size": 63488 00:20:50.382 }, 00:20:50.382 { 00:20:50.382 "name": "BaseBdev2", 00:20:50.382 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:50.382 "is_configured": true, 00:20:50.383 "data_offset": 2048, 00:20:50.383 "data_size": 63488 00:20:50.383 } 00:20:50.383 ] 00:20:50.383 }' 00:20:50.383 13:44:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.383 13:44:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.383 13:44:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.383 13:44:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.383 13:44:29 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:50.642 [2024-07-10 13:44:29.763319] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.642 [2024-07-10 13:44:29.842746] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.642 [2024-07-10 13:44:29.842832] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.642 13:44:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.902 13:44:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.902 "name": "raid_bdev1", 00:20:50.902 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:50.902 "strip_size_kb": 0, 00:20:50.902 "state": "online", 00:20:50.902 "raid_level": "raid1", 00:20:50.902 "superblock": true, 00:20:50.902 "num_base_bdevs": 2, 00:20:50.902 "num_base_bdevs_discovered": 1, 00:20:50.902 "num_base_bdevs_operational": 1, 00:20:50.902 "base_bdevs_list": [ 00:20:50.902 { 00:20:50.902 "name": null, 00:20:50.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.902 "is_configured": false, 00:20:50.902 "data_offset": 2048, 00:20:50.902 "data_size": 63488 00:20:50.902 }, 00:20:50.902 { 00:20:50.902 "name": "BaseBdev2", 00:20:50.902 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:50.902 "is_configured": true, 00:20:50.902 "data_offset": 2048, 00:20:50.902 "data_size": 63488 00:20:50.902 
} 00:20:50.902 ] 00:20:50.902 }' 00:20:50.902 13:44:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.902 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.471 13:44:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.731 "name": "raid_bdev1", 00:20:51.731 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:51.731 "strip_size_kb": 0, 00:20:51.731 "state": "online", 00:20:51.731 "raid_level": "raid1", 00:20:51.731 "superblock": true, 00:20:51.731 "num_base_bdevs": 2, 00:20:51.731 "num_base_bdevs_discovered": 1, 00:20:51.731 "num_base_bdevs_operational": 1, 00:20:51.731 "base_bdevs_list": [ 00:20:51.731 { 00:20:51.731 "name": null, 00:20:51.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.731 "is_configured": false, 00:20:51.731 "data_offset": 2048, 00:20:51.731 "data_size": 63488 00:20:51.731 }, 00:20:51.731 { 00:20:51.731 "name": "BaseBdev2", 00:20:51.731 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:51.731 "is_configured": true, 00:20:51.731 "data_offset": 2048, 00:20:51.731 "data_size": 63488 00:20:51.731 } 00:20:51.731 ] 00:20:51.731 }' 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.731 13:44:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.990 [2024-07-10 13:44:31.127279] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:51.990 [2024-07-10 13:44:31.127321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.990 [2024-07-10 13:44:31.142033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:20:51.990 [2024-07-10 13:44:31.143496] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.990 13:44:31 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:52.927 13:44:32 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.927 13:44:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:52.927 13:44:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:52.927 13:44:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:52.927 13:44:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:52.928 13:44:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.928 13:44:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.187 "name": 
"raid_bdev1", 00:20:53.187 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:53.187 "strip_size_kb": 0, 00:20:53.187 "state": "online", 00:20:53.187 "raid_level": "raid1", 00:20:53.187 "superblock": true, 00:20:53.187 "num_base_bdevs": 2, 00:20:53.187 "num_base_bdevs_discovered": 2, 00:20:53.187 "num_base_bdevs_operational": 2, 00:20:53.187 "process": { 00:20:53.187 "type": "rebuild", 00:20:53.187 "target": "spare", 00:20:53.187 "progress": { 00:20:53.187 "blocks": 22528, 00:20:53.187 "percent": 35 00:20:53.187 } 00:20:53.187 }, 00:20:53.187 "base_bdevs_list": [ 00:20:53.187 { 00:20:53.187 "name": "spare", 00:20:53.187 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:53.187 "is_configured": true, 00:20:53.187 "data_offset": 2048, 00:20:53.187 "data_size": 63488 00:20:53.187 }, 00:20:53.187 { 00:20:53.187 "name": "BaseBdev2", 00:20:53.187 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:53.187 "is_configured": true, 00:20:53.187 "data_offset": 2048, 00:20:53.187 "data_size": 63488 00:20:53.187 } 00:20:53.187 ] 00:20:53.187 }' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:53.187 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@657 -- # local timeout=387 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.187 13:44:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.446 "name": "raid_bdev1", 00:20:53.446 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:53.446 "strip_size_kb": 0, 00:20:53.446 "state": "online", 00:20:53.446 "raid_level": "raid1", 00:20:53.446 "superblock": true, 00:20:53.446 "num_base_bdevs": 2, 00:20:53.446 "num_base_bdevs_discovered": 2, 00:20:53.446 "num_base_bdevs_operational": 2, 00:20:53.446 "process": { 00:20:53.446 "type": "rebuild", 00:20:53.446 "target": "spare", 00:20:53.446 "progress": { 00:20:53.446 "blocks": 28672, 00:20:53.446 "percent": 45 00:20:53.446 } 00:20:53.446 }, 00:20:53.446 "base_bdevs_list": [ 00:20:53.446 { 00:20:53.446 "name": "spare", 00:20:53.446 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:53.446 "is_configured": true, 00:20:53.446 "data_offset": 2048, 00:20:53.446 "data_size": 63488 00:20:53.446 }, 
00:20:53.446 { 00:20:53.446 "name": "BaseBdev2", 00:20:53.446 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:53.446 "is_configured": true, 00:20:53.446 "data_offset": 2048, 00:20:53.446 "data_size": 63488 00:20:53.446 } 00:20:53.446 ] 00:20:53.446 }' 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.446 13:44:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.834 "name": "raid_bdev1", 00:20:54.834 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:54.834 "strip_size_kb": 0, 00:20:54.834 "state": "online", 00:20:54.834 "raid_level": "raid1", 00:20:54.834 "superblock": true, 00:20:54.834 "num_base_bdevs": 2, 00:20:54.834 "num_base_bdevs_discovered": 2, 00:20:54.834 "num_base_bdevs_operational": 2, 00:20:54.834 "process": { 00:20:54.834 "type": "rebuild", 00:20:54.834 "target": "spare", 00:20:54.834 "progress": { 00:20:54.834 "blocks": 55296, 00:20:54.834 "percent": 87 00:20:54.834 } 00:20:54.834 }, 00:20:54.834 "base_bdevs_list": [ 00:20:54.834 { 00:20:54.834 "name": "spare", 00:20:54.834 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:54.834 "is_configured": true, 00:20:54.834 "data_offset": 2048, 00:20:54.834 "data_size": 63488 00:20:54.834 }, 00:20:54.834 { 00:20:54.834 "name": "BaseBdev2", 00:20:54.834 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:54.834 "is_configured": true, 00:20:54.834 "data_offset": 2048, 00:20:54.834 "data_size": 63488 00:20:54.834 } 00:20:54.834 ] 00:20:54.834 }' 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.834 13:44:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.834 13:44:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.834 13:44:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:55.112 [2024-07-10 13:44:34.257253] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:55.112 [2024-07-10 13:44:34.257343] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.112 [2024-07-10 13:44:34.257514] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.049 "name": "raid_bdev1", 00:20:56.049 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:56.049 "strip_size_kb": 0, 00:20:56.049 "state": "online", 00:20:56.049 "raid_level": "raid1", 00:20:56.049 "superblock": true, 00:20:56.049 "num_base_bdevs": 2, 00:20:56.049 "num_base_bdevs_discovered": 2, 00:20:56.049 "num_base_bdevs_operational": 2, 00:20:56.049 "base_bdevs_list": [ 00:20:56.049 { 00:20:56.049 "name": "spare", 00:20:56.049 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:56.049 "is_configured": true, 00:20:56.049 "data_offset": 2048, 00:20:56.049 "data_size": 63488 00:20:56.049 }, 00:20:56.049 { 00:20:56.049 "name": "BaseBdev2", 00:20:56.049 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:56.049 "is_configured": true, 00:20:56.049 "data_offset": 2048, 00:20:56.049 "data_size": 63488 00:20:56.049 } 00:20:56.049 ] 00:20:56.049 }' 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@660 -- # break 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.049 13:44:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.309 "name": "raid_bdev1", 00:20:56.309 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:56.309 "strip_size_kb": 0, 00:20:56.309 "state": "online", 00:20:56.309 "raid_level": "raid1", 00:20:56.309 "superblock": true, 00:20:56.309 "num_base_bdevs": 2, 00:20:56.309 "num_base_bdevs_discovered": 2, 00:20:56.309 "num_base_bdevs_operational": 2, 00:20:56.309 "base_bdevs_list": [ 00:20:56.309 { 00:20:56.309 "name": "spare", 00:20:56.309 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:56.309 "is_configured": true, 00:20:56.309 "data_offset": 2048, 00:20:56.309 "data_size": 63488 00:20:56.309 }, 00:20:56.309 { 00:20:56.309 "name": "BaseBdev2", 00:20:56.309 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:56.309 "is_configured": true, 00:20:56.309 "data_offset": 2048, 00:20:56.309 "data_size": 63488 00:20:56.309 } 00:20:56.309 ] 00:20:56.309 }' 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@190 -- # [[ none == 
\n\o\n\e ]] 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.309 13:44:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.569 13:44:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.569 "name": "raid_bdev1", 00:20:56.569 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:56.569 "strip_size_kb": 0, 00:20:56.569 "state": "online", 00:20:56.569 "raid_level": "raid1", 00:20:56.569 "superblock": true, 00:20:56.569 "num_base_bdevs": 2, 00:20:56.569 "num_base_bdevs_discovered": 2, 00:20:56.569 "num_base_bdevs_operational": 2, 00:20:56.569 "base_bdevs_list": [ 00:20:56.569 { 00:20:56.569 "name": "spare", 00:20:56.569 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 2048, 00:20:56.569 "data_size": 63488 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "name": "BaseBdev2", 00:20:56.569 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 2048, 00:20:56.569 "data_size": 63488 00:20:56.569 } 00:20:56.569 ] 00:20:56.569 }' 00:20:56.569 13:44:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.569 13:44:35 -- common/autotest_common.sh@10 -- # set +x 00:20:57.138 13:44:36 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:57.397 [2024-07-10 13:44:36.620632] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:57.398 [2024-07-10 13:44:36.620672] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.398 [2024-07-10 13:44:36.620759] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.398 [2024-07-10 13:44:36.620832] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.398 [2024-07-10 13:44:36.620841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:57.398 13:44:36 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.398 13:44:36 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:57.657 13:44:36 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:57.657 13:44:36 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:57.657 13:44:36 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@12 -- # local i 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:57.657 13:44:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:57.657 /dev/nbd0 00:20:57.657 13:44:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.917 13:44:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:57.917 13:44:37 -- common/autotest_common.sh@857 -- # local i 00:20:57.917 13:44:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:57.917 13:44:37 -- common/autotest_common.sh@861 -- # break 00:20:57.917 13:44:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.917 1+0 records in 00:20:57.917 1+0 records out 00:20:57.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486191 s, 8.4 MB/s 00:20:57.917 13:44:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.917 13:44:37 -- common/autotest_common.sh@874 -- # size=4096 00:20:57.917 13:44:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.917 13:44:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:57.917 13:44:37 -- common/autotest_common.sh@877 -- # return 0 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:57.917 /dev/nbd1 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:57.917 13:44:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:57.917 13:44:37 -- common/autotest_common.sh@857 -- # local i 00:20:57.917 13:44:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:57.917 13:44:37 -- common/autotest_common.sh@861 -- # break 00:20:57.917 13:44:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:57.917 13:44:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.917 1+0 records in 00:20:57.917 1+0 records out 00:20:57.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.00053319 s, 7.7 MB/s 00:20:57.917 13:44:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.917 13:44:37 -- common/autotest_common.sh@874 -- # size=4096 00:20:57.917 13:44:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.917 13:44:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:57.917 13:44:37 -- common/autotest_common.sh@877 -- # return 0 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.917 13:44:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:57.917 13:44:37 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:58.176 13:44:37 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@51 -- # local i 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.176 13:44:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@41 -- # break 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.435 13:44:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:58.695 13:44:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:58.695 13:44:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:58.695 13:44:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.695 13:44:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:58.695 13:44:38 -- bdev/nbd_common.sh@41 -- # break 00:20:58.695 13:44:38 -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.695 13:44:38 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:58.695 13:44:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:58.695 13:44:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:58.695 13:44:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:20:58.954 13:44:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:59.214 [2024-07-10 13:44:38.345425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:59.214 [2024-07-10 13:44:38.345521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.214 [2024-07-10 13:44:38.345548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:59.214 [2024-07-10 13:44:38.345567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.214 [2024-07-10 13:44:38.347338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.214 [2024-07-10 13:44:38.347400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:59.214 [2024-07-10 13:44:38.347506] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:59.214 [2024-07-10 13:44:38.347555] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.214 BaseBdev1 00:20:59.214 13:44:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:59.214 13:44:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:59.214 13:44:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:59.214 13:44:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:59.474 [2024-07-10 13:44:38.700839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:59.474 [2024-07-10 13:44:38.700932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.474 [2024-07-10 13:44:38.700961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:59.474 [2024-07-10 13:44:38.701000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.474 [2024-07-10 13:44:38.701402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.474 [2024-07-10 13:44:38.701446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:59.474 [2024-07-10 13:44:38.701551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:59.474 [2024-07-10 13:44:38.701565] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:59.474 [2024-07-10 13:44:38.701571] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.474 [2024-07-10 13:44:38.701592] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:20:59.474 [2024-07-10 13:44:38.701660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.474 BaseBdev2 00:20:59.474 13:44:38 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:59.734 13:44:38 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:59.734 [2024-07-10 
13:44:39.015539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:59.734 [2024-07-10 13:44:39.015644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.734 [2024-07-10 13:44:39.015679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:59.734 [2024-07-10 13:44:39.015696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.734 [2024-07-10 13:44:39.016200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.734 [2024-07-10 13:44:39.016257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:59.734 [2024-07-10 13:44:39.016367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:59.734 [2024-07-10 13:44:39.016404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:59.734 spare 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.734 13:44:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.993 [2024-07-10 13:44:39.116306] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:59.993 [2024-07-10 13:44:39.116330] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:59.993 [2024-07-10 13:44:39.116460] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:20:59.993 [2024-07-10 13:44:39.116832] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:59.993 [2024-07-10 13:44:39.116848] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:59.993 [2024-07-10 13:44:39.116977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.993 13:44:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.993 "name": "raid_bdev1", 00:20:59.993 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:20:59.993 "strip_size_kb": 0, 00:20:59.993 "state": "online", 00:20:59.993 "raid_level": "raid1", 00:20:59.993 "superblock": true, 00:20:59.993 "num_base_bdevs": 2, 00:20:59.993 "num_base_bdevs_discovered": 2, 00:20:59.993 "num_base_bdevs_operational": 2, 00:20:59.993 "base_bdevs_list": [ 00:20:59.993 { 00:20:59.993 "name": "spare", 00:20:59.993 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:20:59.993 "is_configured": true, 00:20:59.993 "data_offset": 2048, 00:20:59.993 "data_size": 63488 00:20:59.993 }, 00:20:59.993 { 00:20:59.993 "name": "BaseBdev2", 00:20:59.993 "uuid": 
"75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:20:59.993 "is_configured": true, 00:20:59.993 "data_offset": 2048, 00:20:59.993 "data_size": 63488 00:20:59.993 } 00:20:59.993 ] 00:20:59.993 }' 00:20:59.993 13:44:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.993 13:44:39 -- common/autotest_common.sh@10 -- # set +x 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.559 13:44:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.815 13:44:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.815 "name": "raid_bdev1", 00:21:00.815 "uuid": "fa3dd962-810f-44ad-a2b2-b7ab14967532", 00:21:00.815 "strip_size_kb": 0, 00:21:00.815 "state": "online", 00:21:00.815 "raid_level": "raid1", 00:21:00.815 "superblock": true, 00:21:00.815 "num_base_bdevs": 2, 00:21:00.815 "num_base_bdevs_discovered": 2, 00:21:00.815 "num_base_bdevs_operational": 2, 00:21:00.815 "base_bdevs_list": [ 00:21:00.815 { 00:21:00.815 "name": "spare", 00:21:00.815 "uuid": "fb283819-e6b9-58db-a3f0-88c62ebcecc3", 00:21:00.815 "is_configured": true, 00:21:00.815 "data_offset": 2048, 00:21:00.815 "data_size": 63488 00:21:00.815 }, 00:21:00.815 { 00:21:00.815 "name": "BaseBdev2", 00:21:00.815 "uuid": "75f0612e-2917-51ae-81e7-7eb4b3228b6c", 00:21:00.815 "is_configured": true, 00:21:00.815 "data_offset": 2048, 00:21:00.815 "data_size": 63488 00:21:00.815 } 00:21:00.815 ] 00:21:00.815 }' 00:21:00.815 13:44:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.815 13:44:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:00.815 13:44:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:00.816 13:44:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:00.816 13:44:40 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.816 13:44:40 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:01.072 13:44:40 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.072 13:44:40 -- bdev/bdev_raid.sh@709 -- # killprocess 126239 00:21:01.072 13:44:40 -- common/autotest_common.sh@926 -- # '[' -z 126239 ']' 00:21:01.072 13:44:40 -- common/autotest_common.sh@930 -- # kill -0 126239 00:21:01.072 13:44:40 -- common/autotest_common.sh@931 -- # uname 00:21:01.072 13:44:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:01.072 13:44:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126239 00:21:01.072 killing process with pid 126239 00:21:01.072 Received shutdown signal, test time was about 60.000000 seconds 00:21:01.072 00:21:01.072 Latency(us) 00:21:01.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.072 =================================================================================================================== 00:21:01.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.072 13:44:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:01.072 
13:44:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:01.072 13:44:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126239' 00:21:01.072 13:44:40 -- common/autotest_common.sh@945 -- # kill 126239 00:21:01.072 13:44:40 -- common/autotest_common.sh@950 -- # wait 126239 00:21:01.072 [2024-07-10 13:44:40.282559] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:01.072 [2024-07-10 13:44:40.282634] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.072 [2024-07-10 13:44:40.282693] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.072 [2024-07-10 13:44:40.282705] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:01.330 [2024-07-10 13:44:40.550194] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:02.749 ************************************ 00:21:02.749 END TEST raid_rebuild_test_sb 00:21:02.749 ************************************ 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:02.749 00:21:02.749 real 0m21.918s 00:21:02.749 user 0m31.181s 00:21:02.749 sys 0m3.415s 00:21:02.749 13:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.749 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:02.749 13:44:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:02.749 13:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:02.749 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.749 ************************************ 00:21:02.749 START TEST raid_rebuild_test_io 00:21:02.749 ************************************ 00:21:02.749 13:44:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:02.749 
13:44:41 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=126876 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126876 /var/tmp/spdk-raid.sock 00:21:02.749 13:44:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:02.749 13:44:41 -- common/autotest_common.sh@819 -- # '[' -z 126876 ']' 00:21:02.749 13:44:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:02.749 13:44:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.749 13:44:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:02.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:02.749 13:44:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.749 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.749 [2024-07-10 13:44:41.872342] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:02.749 [2024-07-10 13:44:41.872570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126876 ] 00:21:02.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:02.749 Zero copy mechanism will not be used. 00:21:02.749 [2024-07-10 13:44:42.011131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.006 [2024-07-10 13:44:42.200759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.263 [2024-07-10 13:44:42.392850] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:03.521 13:44:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.521 13:44:42 -- common/autotest_common.sh@852 -- # return 0 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:03.521 BaseBdev1 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:03.521 13:44:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:03.778 BaseBdev2 00:21:03.778 13:44:43 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:04.035 spare_malloc 00:21:04.035 13:44:43 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:04.292 spare_delay 00:21:04.292 13:44:43 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:04.292 [2024-07-10 13:44:43.589979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.292 [2024-07-10 13:44:43.590065] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.292 [2024-07-10 13:44:43.590095] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:04.292 [2024-07-10 13:44:43.590128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.292 [2024-07-10 13:44:43.591935] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.292 [2024-07-10 13:44:43.591978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.292 spare 00:21:04.292 13:44:43 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:04.549 [2024-07-10 13:44:43.745766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.549 [2024-07-10 13:44:43.747246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.549 [2024-07-10 13:44:43.747322] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:04.549 [2024-07-10 13:44:43.747334] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:04.549 [2024-07-10 13:44:43.747494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:04.549 [2024-07-10 13:44:43.747776] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:04.549 [2024-07-10 13:44:43.747794] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:04.549 [2024-07-10 13:44:43.747929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.549 13:44:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.809 13:44:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.809 "name": "raid_bdev1", 00:21:04.809 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:04.809 "strip_size_kb": 0, 00:21:04.809 "state": "online", 00:21:04.809 "raid_level": "raid1", 00:21:04.809 "superblock": false, 00:21:04.809 "num_base_bdevs": 2, 00:21:04.809 "num_base_bdevs_discovered": 2, 00:21:04.809 "num_base_bdevs_operational": 2, 00:21:04.809 "base_bdevs_list": [ 00:21:04.809 { 00:21:04.809 "name": "BaseBdev1", 00:21:04.809 "uuid": "a726d64e-2096-44ca-9c7f-f5d0a649b9a9", 00:21:04.809 "is_configured": true, 00:21:04.809 "data_offset": 0, 00:21:04.809 "data_size": 65536 00:21:04.809 }, 00:21:04.809 { 00:21:04.809 "name": "BaseBdev2", 
00:21:04.809 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:04.809 "is_configured": true, 00:21:04.809 "data_offset": 0, 00:21:04.809 "data_size": 65536 00:21:04.809 } 00:21:04.809 ] 00:21:04.809 }' 00:21:04.809 13:44:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.809 13:44:43 -- common/autotest_common.sh@10 -- # set +x 00:21:05.377 13:44:44 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.377 13:44:44 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:05.637 [2024-07-10 13:44:44.736197] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:05.637 13:44:44 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:05.897 [2024-07-10 13:44:45.017879] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:05.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:05.897 Zero copy mechanism will not be used. 00:21:05.897 Running I/O for 60 seconds... 00:21:05.897 [2024-07-10 13:44:45.100055] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:05.897 [2024-07-10 13:44:45.105970] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.897 13:44:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.156 13:44:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.156 "name": "raid_bdev1", 00:21:06.156 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:06.156 "strip_size_kb": 0, 00:21:06.156 "state": "online", 00:21:06.156 "raid_level": "raid1", 00:21:06.156 "superblock": false, 00:21:06.156 "num_base_bdevs": 2, 00:21:06.156 "num_base_bdevs_discovered": 1, 00:21:06.156 "num_base_bdevs_operational": 1, 00:21:06.156 "base_bdevs_list": [ 00:21:06.156 { 00:21:06.156 "name": null, 00:21:06.156 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:06.156 "is_configured": false, 00:21:06.156 "data_offset": 0, 00:21:06.156 "data_size": 65536 00:21:06.156 }, 00:21:06.156 { 00:21:06.156 "name": "BaseBdev2", 00:21:06.156 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:06.156 "is_configured": true, 00:21:06.156 "data_offset": 0, 00:21:06.156 "data_size": 65536 00:21:06.156 } 00:21:06.156 ] 00:21:06.156 }' 00:21:06.156 13:44:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.156 13:44:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.724 13:44:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:06.982 [2024-07-10 13:44:46.111625] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:06.982 [2024-07-10 13:44:46.111677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.982 [2024-07-10 13:44:46.155082] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:06.982 [2024-07-10 13:44:46.156813] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.982 13:44:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:06.982 [2024-07-10 13:44:46.277219] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:06.982 [2024-07-10 13:44:46.277895] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:07.241 [2024-07-10 13:44:46.502426] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:07.241 [2024-07-10 13:44:46.502787] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:07.501 [2024-07-10 13:44:46.729595] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:07.501 [2024-07-10 13:44:46.852510] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.068 13:44:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.069 [2024-07-10 13:44:47.300588] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:08.069 [2024-07-10 13:44:47.300899] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:08.069 13:44:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.069 "name": "raid_bdev1", 00:21:08.069 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:08.069 "strip_size_kb": 0, 00:21:08.069 "state": "online", 00:21:08.069 "raid_level": "raid1", 00:21:08.069 "superblock": false, 00:21:08.069 "num_base_bdevs": 2, 
00:21:08.069 "num_base_bdevs_discovered": 2, 00:21:08.069 "num_base_bdevs_operational": 2, 00:21:08.069 "process": { 00:21:08.069 "type": "rebuild", 00:21:08.069 "target": "spare", 00:21:08.069 "progress": { 00:21:08.069 "blocks": 16384, 00:21:08.069 "percent": 25 00:21:08.069 } 00:21:08.069 }, 00:21:08.069 "base_bdevs_list": [ 00:21:08.069 { 00:21:08.069 "name": "spare", 00:21:08.069 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:08.069 "is_configured": true, 00:21:08.069 "data_offset": 0, 00:21:08.069 "data_size": 65536 00:21:08.069 }, 00:21:08.069 { 00:21:08.069 "name": "BaseBdev2", 00:21:08.069 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:08.069 "is_configured": true, 00:21:08.069 "data_offset": 0, 00:21:08.069 "data_size": 65536 00:21:08.069 } 00:21:08.069 ] 00:21:08.069 }' 00:21:08.069 13:44:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.069 13:44:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.069 13:44:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.328 13:44:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.328 13:44:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:08.328 [2024-07-10 13:44:47.627899] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:08.328 [2024-07-10 13:44:47.639649] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.586 [2024-07-10 13:44:47.742730] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:08.586 [2024-07-10 13:44:47.842621] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:08.586 [2024-07-10 13:44:47.856822] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.586 [2024-07-10 13:44:47.893504] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.586 13:44:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.844 13:44:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.844 "name": "raid_bdev1", 00:21:08.844 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:08.844 "strip_size_kb": 0, 00:21:08.844 "state": "online", 00:21:08.844 "raid_level": "raid1", 00:21:08.844 "superblock": false, 00:21:08.844 "num_base_bdevs": 2, 00:21:08.844 "num_base_bdevs_discovered": 1, 
00:21:08.844 "num_base_bdevs_operational": 1, 00:21:08.844 "base_bdevs_list": [ 00:21:08.844 { 00:21:08.844 "name": null, 00:21:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.844 "is_configured": false, 00:21:08.844 "data_offset": 0, 00:21:08.844 "data_size": 65536 00:21:08.845 }, 00:21:08.845 { 00:21:08.845 "name": "BaseBdev2", 00:21:08.845 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:08.845 "is_configured": true, 00:21:08.845 "data_offset": 0, 00:21:08.845 "data_size": 65536 00:21:08.845 } 00:21:08.845 ] 00:21:08.845 }' 00:21:08.845 13:44:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.845 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.779 13:44:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.779 "name": "raid_bdev1", 00:21:09.779 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:09.779 "strip_size_kb": 0, 00:21:09.779 "state": "online", 00:21:09.779 "raid_level": "raid1", 00:21:09.779 "superblock": false, 00:21:09.779 "num_base_bdevs": 2, 00:21:09.779 "num_base_bdevs_discovered": 1, 00:21:09.779 "num_base_bdevs_operational": 1, 00:21:09.779 "base_bdevs_list": [ 00:21:09.779 { 00:21:09.779 "name": null, 00:21:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.779 "is_configured": false, 00:21:09.779 "data_offset": 0, 00:21:09.779 "data_size": 65536 00:21:09.779 }, 00:21:09.779 { 00:21:09.779 "name": "BaseBdev2", 00:21:09.779 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:09.779 "is_configured": true, 00:21:09.779 "data_offset": 0, 00:21:09.779 "data_size": 65536 00:21:09.779 } 00:21:09.779 ] 00:21:09.779 }' 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:09.779 13:44:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.038 [2024-07-10 13:44:49.294780] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:10.038 [2024-07-10 13:44:49.294832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:10.038 13:44:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:10.038 [2024-07-10 13:44:49.351328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:10.038 [2024-07-10 13:44:49.353092] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:10.295 [2024-07-10 13:44:49.473906] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:10.295 [2024-07-10 13:44:49.474377] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:10.553 [2024-07-10 13:44:49.683527] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:10.553 [2024-07-10 13:44:49.683815] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:10.812 [2024-07-10 13:44:50.018601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:11.070 [2024-07-10 13:44:50.233448] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:11.070 [2024-07-10 13:44:50.233778] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.070 13:44:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.331 "name": "raid_bdev1", 00:21:11.331 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:11.331 "strip_size_kb": 0, 00:21:11.331 "state": "online", 00:21:11.331 "raid_level": "raid1", 00:21:11.331 "superblock": false, 00:21:11.331 "num_base_bdevs": 2, 00:21:11.331 "num_base_bdevs_discovered": 2, 00:21:11.331 "num_base_bdevs_operational": 2, 00:21:11.331 "process": { 00:21:11.331 "type": "rebuild", 00:21:11.331 "target": "spare", 00:21:11.331 "progress": { 00:21:11.331 "blocks": 12288, 00:21:11.331 "percent": 18 00:21:11.331 } 00:21:11.331 }, 00:21:11.331 "base_bdevs_list": [ 00:21:11.331 { 00:21:11.331 "name": "spare", 00:21:11.331 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:11.331 "is_configured": true, 00:21:11.331 "data_offset": 0, 00:21:11.331 "data_size": 65536 00:21:11.331 }, 00:21:11.331 { 00:21:11.331 "name": "BaseBdev2", 00:21:11.331 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:11.331 "is_configured": true, 00:21:11.331 "data_offset": 0, 00:21:11.331 "data_size": 65536 00:21:11.331 } 00:21:11.331 ] 00:21:11.331 }' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.331 [2024-07-10 13:44:50.561031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@657 -- # 
local timeout=405 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.331 13:44:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.331 [2024-07-10 13:44:50.669157] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:11.331 [2024-07-10 13:44:50.669442] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.591 "name": "raid_bdev1", 00:21:11.591 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:11.591 "strip_size_kb": 0, 00:21:11.591 "state": "online", 00:21:11.591 "raid_level": "raid1", 00:21:11.591 "superblock": false, 00:21:11.591 "num_base_bdevs": 2, 00:21:11.591 "num_base_bdevs_discovered": 2, 00:21:11.591 "num_base_bdevs_operational": 2, 00:21:11.591 "process": { 00:21:11.591 "type": "rebuild", 00:21:11.591 "target": "spare", 00:21:11.591 "progress": { 00:21:11.591 "blocks": 18432, 00:21:11.591 "percent": 28 00:21:11.591 } 00:21:11.591 }, 00:21:11.591 "base_bdevs_list": [ 00:21:11.591 { 00:21:11.591 "name": "spare", 00:21:11.591 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:11.591 "is_configured": true, 00:21:11.591 "data_offset": 0, 00:21:11.591 "data_size": 65536 00:21:11.591 }, 00:21:11.591 { 00:21:11.591 "name": "BaseBdev2", 00:21:11.591 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:11.591 "is_configured": true, 00:21:11.591 "data_offset": 0, 00:21:11.591 "data_size": 65536 00:21:11.591 } 00:21:11.591 ] 00:21:11.591 }' 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.591 13:44:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:11.591 [2024-07-10 13:44:50.933751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:11.591 [2024-07-10 13:44:50.934199] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:11.849 [2024-07-10 13:44:51.161775] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:11.849 [2024-07-10 13:44:51.162086] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:12.417 [2024-07-10 13:44:51.482785] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.677 13:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.677 [2024-07-10 13:44:51.928745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:12.936 13:44:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:12.936 "name": "raid_bdev1", 00:21:12.936 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:12.936 "strip_size_kb": 0, 00:21:12.936 "state": "online", 00:21:12.936 "raid_level": "raid1", 00:21:12.936 "superblock": false, 00:21:12.936 "num_base_bdevs": 2, 00:21:12.936 "num_base_bdevs_discovered": 2, 00:21:12.936 "num_base_bdevs_operational": 2, 00:21:12.936 "process": { 00:21:12.936 "type": "rebuild", 00:21:12.936 "target": "spare", 00:21:12.936 "progress": { 00:21:12.936 "blocks": 32768, 00:21:12.936 "percent": 50 00:21:12.936 } 00:21:12.936 }, 00:21:12.936 "base_bdevs_list": [ 00:21:12.936 { 00:21:12.936 "name": "spare", 00:21:12.936 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:12.936 "is_configured": true, 00:21:12.936 "data_offset": 0, 00:21:12.936 "data_size": 65536 00:21:12.936 }, 00:21:12.936 { 00:21:12.936 "name": "BaseBdev2", 00:21:12.936 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:12.936 "is_configured": true, 00:21:12.936 "data_offset": 0, 00:21:12.936 "data_size": 65536 00:21:12.936 } 00:21:12.936 ] 00:21:12.936 }' 00:21:12.936 13:44:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:12.937 [2024-07-10 13:44:52.157239] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:12.937 13:44:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.937 13:44:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:12.937 13:44:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.937 13:44:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.940 13:44:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.940 [2024-07-10 13:44:53.276914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.202 "name": "raid_bdev1", 00:21:14.202 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:14.202 "strip_size_kb": 0, 
00:21:14.202 "state": "online", 00:21:14.202 "raid_level": "raid1", 00:21:14.202 "superblock": false, 00:21:14.202 "num_base_bdevs": 2, 00:21:14.202 "num_base_bdevs_discovered": 2, 00:21:14.202 "num_base_bdevs_operational": 2, 00:21:14.202 "process": { 00:21:14.202 "type": "rebuild", 00:21:14.202 "target": "spare", 00:21:14.202 "progress": { 00:21:14.202 "blocks": 55296, 00:21:14.202 "percent": 84 00:21:14.202 } 00:21:14.202 }, 00:21:14.202 "base_bdevs_list": [ 00:21:14.202 { 00:21:14.202 "name": "spare", 00:21:14.202 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:14.202 "is_configured": true, 00:21:14.202 "data_offset": 0, 00:21:14.202 "data_size": 65536 00:21:14.202 }, 00:21:14.202 { 00:21:14.202 "name": "BaseBdev2", 00:21:14.202 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:14.202 "is_configured": true, 00:21:14.202 "data_offset": 0, 00:21:14.202 "data_size": 65536 00:21:14.202 } 00:21:14.202 ] 00:21:14.202 }' 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.202 13:44:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:14.768 [2024-07-10 13:44:53.926863] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:14.768 [2024-07-10 13:44:54.032239] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:14.768 [2024-07-10 13:44:54.035422] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.334 13:44:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.591 13:44:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:15.591 "name": "raid_bdev1", 00:21:15.591 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:15.591 "strip_size_kb": 0, 00:21:15.591 "state": "online", 00:21:15.591 "raid_level": "raid1", 00:21:15.592 "superblock": false, 00:21:15.592 "num_base_bdevs": 2, 00:21:15.592 "num_base_bdevs_discovered": 2, 00:21:15.592 "num_base_bdevs_operational": 2, 00:21:15.592 "base_bdevs_list": [ 00:21:15.592 { 00:21:15.592 "name": "spare", 00:21:15.592 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev2", 00:21:15.592 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 } 00:21:15.592 ] 00:21:15.592 }' 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@660 -- # break 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.592 13:44:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.850 13:44:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:15.850 "name": "raid_bdev1", 00:21:15.850 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:15.850 "strip_size_kb": 0, 00:21:15.850 "state": "online", 00:21:15.850 "raid_level": "raid1", 00:21:15.850 "superblock": false, 00:21:15.850 "num_base_bdevs": 2, 00:21:15.850 "num_base_bdevs_discovered": 2, 00:21:15.850 "num_base_bdevs_operational": 2, 00:21:15.850 "base_bdevs_list": [ 00:21:15.850 { 00:21:15.850 "name": "spare", 00:21:15.850 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:15.850 "is_configured": true, 00:21:15.850 "data_offset": 0, 00:21:15.850 "data_size": 65536 00:21:15.850 }, 00:21:15.850 { 00:21:15.850 "name": "BaseBdev2", 00:21:15.850 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:15.850 "is_configured": true, 00:21:15.850 "data_offset": 0, 00:21:15.850 "data_size": 65536 00:21:15.850 } 00:21:15.850 ] 00:21:15.850 }' 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.850 13:44:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.108 13:44:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.108 "name": "raid_bdev1", 00:21:16.108 "uuid": "e3128e25-7d82-43e3-8874-30671b54976b", 00:21:16.108 "strip_size_kb": 0, 00:21:16.108 "state": "online", 00:21:16.108 "raid_level": "raid1", 00:21:16.108 "superblock": false, 00:21:16.108 "num_base_bdevs": 2, 00:21:16.108 "num_base_bdevs_discovered": 2, 00:21:16.108 
"num_base_bdevs_operational": 2, 00:21:16.108 "base_bdevs_list": [ 00:21:16.108 { 00:21:16.108 "name": "spare", 00:21:16.108 "uuid": "965849fc-8d9e-5a65-8cfe-0c0d1e1f79fa", 00:21:16.108 "is_configured": true, 00:21:16.108 "data_offset": 0, 00:21:16.108 "data_size": 65536 00:21:16.108 }, 00:21:16.108 { 00:21:16.108 "name": "BaseBdev2", 00:21:16.108 "uuid": "7333c734-144b-4cbe-bfb4-34f0156b45b4", 00:21:16.108 "is_configured": true, 00:21:16.108 "data_offset": 0, 00:21:16.108 "data_size": 65536 00:21:16.108 } 00:21:16.108 ] 00:21:16.108 }' 00:21:16.108 13:44:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.108 13:44:55 -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 13:44:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:16.933 [2024-07-10 13:44:56.060627] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.933 [2024-07-10 13:44:56.060667] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.933 00:21:16.933 Latency(us) 00:21:16.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.933 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:16.933 raid_bdev1 : 11.08 114.91 344.73 0.00 0.00 11943.98 341.63 114015.47 00:21:16.933 =================================================================================================================== 00:21:16.933 Total : 114.91 344.73 0.00 0.00 11943.98 341.63 114015.47 00:21:16.933 [2024-07-10 13:44:56.096419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.933 [2024-07-10 13:44:56.096462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.933 [2024-07-10 13:44:56.096522] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.933 [2024-07-10 13:44:56.096530] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:16.933 0 00:21:16.933 13:44:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.933 13:44:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:17.192 /dev/nbd0 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:17.192 13:44:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:17.192 
13:44:56 -- common/autotest_common.sh@857 -- # local i 00:21:17.192 13:44:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:17.192 13:44:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:17.192 13:44:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:17.192 13:44:56 -- common/autotest_common.sh@861 -- # break 00:21:17.192 13:44:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:17.192 13:44:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:17.192 13:44:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.192 1+0 records in 00:21:17.192 1+0 records out 00:21:17.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468416 s, 8.7 MB/s 00:21:17.192 13:44:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.192 13:44:56 -- common/autotest_common.sh@874 -- # size=4096 00:21:17.192 13:44:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.192 13:44:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:17.192 13:44:56 -- common/autotest_common.sh@877 -- # return 0 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:17.192 13:44:56 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:17.192 13:44:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:17.452 /dev/nbd1 00:21:17.452 13:44:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:17.452 13:44:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:17.452 13:44:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:17.452 13:44:56 -- common/autotest_common.sh@857 -- # local i 00:21:17.452 13:44:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:17.452 13:44:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:17.452 13:44:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:17.452 13:44:56 -- common/autotest_common.sh@861 -- # break 00:21:17.453 13:44:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:17.453 13:44:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:17.453 13:44:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.453 1+0 records in 00:21:17.453 1+0 records out 00:21:17.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189669 s, 21.6 MB/s 00:21:17.453 13:44:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.453 13:44:56 -- 
common/autotest_common.sh@874 -- # size=4096 00:21:17.453 13:44:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.453 13:44:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:17.453 13:44:56 -- common/autotest_common.sh@877 -- # return 0 00:21:17.453 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.453 13:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:17.453 13:44:56 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:17.711 13:44:56 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@51 -- # local i 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:17.711 13:44:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@41 -- # break 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@45 -- # return 0 00:21:17.969 13:44:57 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@51 -- # local i 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:17.969 13:44:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@41 -- # break 00:21:18.227 13:44:57 -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.227 13:44:57 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:18.227 13:44:57 -- bdev/bdev_raid.sh@709 -- # killprocess 126876 00:21:18.227 13:44:57 
-- common/autotest_common.sh@926 -- # '[' -z 126876 ']'
00:21:18.227 13:44:57 -- common/autotest_common.sh@930 -- # kill -0 126876
00:21:18.227 13:44:57 -- common/autotest_common.sh@931 -- # uname
00:21:18.227 13:44:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:18.227 13:44:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126876
00:21:18.227 13:44:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:18.227 13:44:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:18.227 13:44:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126876' killing process with pid 126876 13:44:57 -- common/autotest_common.sh@945 -- # kill 126876
00:21:18.227 Received shutdown signal, test time was about 12.549421 seconds
00:21:18.227
00:21:18.227 Latency(us)
00:21:18.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:18.227 ===================================================================================================================
00:21:18.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:18.227 [2024-07-10 13:44:57.544924] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:18.227 13:44:57 -- common/autotest_common.sh@950 -- # wait 126876
00:21:18.485 [2024-07-10 13:44:57.757715] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:19.860 ************************************
00:21:19.860 END TEST raid_rebuild_test_io
00:21:19.860 ************************************
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@711 -- # return 0
00:21:19.860
00:21:19.860 real 0m17.214s
00:21:19.860 user 0m25.512s
00:21:19.860 sys 0m1.818s
00:21:19.860 13:44:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:19.860 13:44:59 -- common/autotest_common.sh@10 -- # set +x
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true
00:21:19.860 13:44:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:21:19.860 13:44:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:21:19.860 13:44:59 -- common/autotest_common.sh@10 -- # set +x
00:21:19.860 ************************************
00:21:19.860 START TEST raid_rebuild_test_sb_io
00:21:19.860 ************************************
00:21:19.860 13:44:59 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@519 -- # local superblock=true
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@520 -- # local background_io=true
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@522
-- # local raid_bdev_name=raid_bdev1 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=127377 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127377 /var/tmp/spdk-raid.sock 00:21:19.860 13:44:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:19.860 13:44:59 -- common/autotest_common.sh@819 -- # '[' -z 127377 ']' 00:21:19.860 13:44:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:19.860 13:44:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:19.860 13:44:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:19.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:19.860 13:44:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:19.860 13:44:59 -- common/autotest_common.sh@10 -- # set +x 00:21:19.860 [2024-07-10 13:44:59.155627] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:19.860 [2024-07-10 13:44:59.155778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127377 ] 00:21:19.860 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:19.860 Zero copy mechanism will not be used. 
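(The raid_rebuild_test_sb_io run above drives a standalone bdevperf app over a private RPC socket. A condensed sketch of that control flow, reassembled from command lines that appear verbatim in this trace; running bdevperf in the background with '&' is the sketch's own simplification, the harness itself waits for the socket via waitforlisten:)

    # start bdevperf with a dedicated RPC socket and a 60 s randrw job (flags as traced above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    # build the mirror over the socket: malloc backing bdev, passthru wrapper, raid1 with superblock (-s) on top
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1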
00:21:20.120 [2024-07-10 13:44:59.313228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.384 [2024-07-10 13:44:59.499548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.384 [2024-07-10 13:44:59.692970] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.647 13:44:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:20.647 13:44:59 -- common/autotest_common.sh@852 -- # return 0 00:21:20.647 13:44:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:20.647 13:44:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:20.647 13:44:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:20.907 BaseBdev1_malloc 00:21:20.907 13:45:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.166 [2024-07-10 13:45:00.340618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.166 [2024-07-10 13:45:00.340712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.166 [2024-07-10 13:45:00.340738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:21.166 [2024-07-10 13:45:00.340769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.166 [2024-07-10 13:45:00.342776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.166 [2024-07-10 13:45:00.342821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.166 BaseBdev1 00:21:21.166 13:45:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:21.166 13:45:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:21.166 13:45:00 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:21.426 BaseBdev2_malloc 00:21:21.426 13:45:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:21.426 [2024-07-10 13:45:00.763197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:21.426 [2024-07-10 13:45:00.763289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.426 [2024-07-10 13:45:00.763324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:21.426 [2024-07-10 13:45:00.763363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.426 [2024-07-10 13:45:00.765301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.426 [2024-07-10 13:45:00.765345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:21.426 BaseBdev2 00:21:21.426 13:45:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:21.685 spare_malloc 00:21:21.685 13:45:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:21.945 spare_delay 00:21:21.945 13:45:01 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:22.204 [2024-07-10 13:45:01.340477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.205 [2024-07-10 13:45:01.340559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.205 [2024-07-10 13:45:01.340595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:22.205 [2024-07-10 13:45:01.340644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.205 [2024-07-10 13:45:01.342704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.205 [2024-07-10 13:45:01.342753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.205 spare 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:22.205 [2024-07-10 13:45:01.512261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.205 [2024-07-10 13:45:01.513779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:22.205 [2024-07-10 13:45:01.513953] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:22.205 [2024-07-10 13:45:01.513964] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:22.205 [2024-07-10 13:45:01.514074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:22.205 [2024-07-10 13:45:01.514375] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:22.205 [2024-07-10 13:45:01.514400] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:22.205 [2024-07-10 13:45:01.514550] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.205 13:45:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.464 13:45:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.464 "name": "raid_bdev1", 00:21:22.464 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:22.464 "strip_size_kb": 0, 00:21:22.464 "state": "online", 00:21:22.464 "raid_level": "raid1", 00:21:22.464 "superblock": true, 00:21:22.464 "num_base_bdevs": 2, 00:21:22.464 "num_base_bdevs_discovered": 2, 00:21:22.464 "num_base_bdevs_operational": 2, 00:21:22.464 
"base_bdevs_list": [ 00:21:22.464 { 00:21:22.464 "name": "BaseBdev1", 00:21:22.464 "uuid": "d5ecbdb0-1cf3-5880-96e9-fe61adeeec88", 00:21:22.464 "is_configured": true, 00:21:22.464 "data_offset": 2048, 00:21:22.464 "data_size": 63488 00:21:22.464 }, 00:21:22.464 { 00:21:22.464 "name": "BaseBdev2", 00:21:22.464 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:22.464 "is_configured": true, 00:21:22.464 "data_offset": 2048, 00:21:22.464 "data_size": 63488 00:21:22.464 } 00:21:22.464 ] 00:21:22.464 }' 00:21:22.464 13:45:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.464 13:45:01 -- common/autotest_common.sh@10 -- # set +x 00:21:23.032 13:45:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:23.032 13:45:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:23.291 [2024-07-10 13:45:02.454916] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.291 13:45:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:23.291 13:45:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:23.291 13:45:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:23.550 [2024-07-10 13:45:02.753245] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:23.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:23.550 Zero copy mechanism will not be used. 00:21:23.550 Running I/O for 60 seconds... 
00:21:23.550 [2024-07-10 13:45:02.864257] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:23.550 [2024-07-10 13:45:02.864462] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.550 13:45:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.818 13:45:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.818 "name": "raid_bdev1", 00:21:23.818 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:23.818 "strip_size_kb": 0, 00:21:23.818 "state": "online", 00:21:23.818 "raid_level": "raid1", 00:21:23.818 "superblock": true, 00:21:23.818 "num_base_bdevs": 2, 00:21:23.818 "num_base_bdevs_discovered": 1, 00:21:23.818 "num_base_bdevs_operational": 1, 00:21:23.818 "base_bdevs_list": [ 00:21:23.818 { 00:21:23.818 "name": null, 00:21:23.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.818 "is_configured": false, 00:21:23.818 "data_offset": 2048, 00:21:23.818 "data_size": 63488 00:21:23.818 }, 00:21:23.818 { 00:21:23.818 "name": "BaseBdev2", 00:21:23.818 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:23.818 "is_configured": true, 00:21:23.818 "data_offset": 2048, 00:21:23.818 "data_size": 63488 00:21:23.818 } 00:21:23.818 ] 00:21:23.818 }' 00:21:23.818 13:45:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.818 13:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.755 13:45:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.755 [2024-07-10 13:45:03.946600] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:24.755 [2024-07-10 13:45:03.946738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.755 13:45:03 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:24.755 [2024-07-10 13:45:04.005119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:24.755 [2024-07-10 13:45:04.006902] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.017 [2024-07-10 13:45:04.116912] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.017 [2024-07-10 13:45:04.117488] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.017 [2024-07-10 13:45:04.250281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:25.017 [2024-07-10 13:45:04.250638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:25.276 [2024-07-10 13:45:04.623959] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:25.276 [2024-07-10 13:45:04.624540] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:25.536 [2024-07-10 13:45:04.834740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.536 [2024-07-10 13:45:04.835092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.795 13:45:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.055 "name": "raid_bdev1", 00:21:26.055 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:26.055 "strip_size_kb": 0, 00:21:26.055 "state": "online", 00:21:26.055 "raid_level": "raid1", 00:21:26.055 "superblock": true, 00:21:26.055 "num_base_bdevs": 2, 00:21:26.055 "num_base_bdevs_discovered": 2, 00:21:26.055 "num_base_bdevs_operational": 2, 00:21:26.055 "process": { 00:21:26.055 "type": "rebuild", 00:21:26.055 "target": "spare", 00:21:26.055 "progress": { 00:21:26.055 "blocks": 14336, 00:21:26.055 "percent": 22 00:21:26.055 } 00:21:26.055 }, 00:21:26.055 "base_bdevs_list": [ 00:21:26.055 { 00:21:26.055 "name": "spare", 00:21:26.055 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:26.055 "is_configured": true, 00:21:26.055 "data_offset": 2048, 00:21:26.055 "data_size": 63488 00:21:26.055 }, 00:21:26.055 { 00:21:26.055 "name": "BaseBdev2", 00:21:26.055 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:26.055 "is_configured": true, 00:21:26.055 "data_offset": 2048, 00:21:26.055 "data_size": 63488 00:21:26.055 } 00:21:26.055 ] 00:21:26.055 }' 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.055 [2024-07-10 13:45:05.295769] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.055 13:45:05 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:26.315 [2024-07-10 13:45:05.557057] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.315 [2024-07-10 13:45:05.634812] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:26.315 [2024-07-10 
13:45:05.657160] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:26.574 [2024-07-10 13:45:05.673037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.574 [2024-07-10 13:45:05.717176] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.574 13:45:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.834 13:45:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.834 "name": "raid_bdev1", 00:21:26.834 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:26.834 "strip_size_kb": 0, 00:21:26.834 "state": "online", 00:21:26.834 "raid_level": "raid1", 00:21:26.834 "superblock": true, 00:21:26.834 "num_base_bdevs": 2, 00:21:26.834 "num_base_bdevs_discovered": 1, 00:21:26.834 "num_base_bdevs_operational": 1, 00:21:26.834 "base_bdevs_list": [ 00:21:26.834 { 00:21:26.834 "name": null, 00:21:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.834 "is_configured": false, 00:21:26.834 "data_offset": 2048, 00:21:26.834 "data_size": 63488 00:21:26.834 }, 00:21:26.834 { 00:21:26.834 "name": "BaseBdev2", 00:21:26.834 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:26.834 "is_configured": true, 00:21:26.834 "data_offset": 2048, 00:21:26.834 "data_size": 63488 00:21:26.834 } 00:21:26.834 ] 00:21:26.834 }' 00:21:26.834 13:45:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.834 13:45:05 -- common/autotest_common.sh@10 -- # set +x 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.405 13:45:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.665 "name": "raid_bdev1", 00:21:27.665 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:27.665 "strip_size_kb": 0, 00:21:27.665 "state": "online", 00:21:27.665 "raid_level": "raid1", 00:21:27.665 "superblock": true, 00:21:27.665 "num_base_bdevs": 2, 00:21:27.665 "num_base_bdevs_discovered": 1, 00:21:27.665 "num_base_bdevs_operational": 
1, 00:21:27.665 "base_bdevs_list": [ 00:21:27.665 { 00:21:27.665 "name": null, 00:21:27.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.665 "is_configured": false, 00:21:27.665 "data_offset": 2048, 00:21:27.665 "data_size": 63488 00:21:27.665 }, 00:21:27.665 { 00:21:27.665 "name": "BaseBdev2", 00:21:27.665 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:27.665 "is_configured": true, 00:21:27.665 "data_offset": 2048, 00:21:27.665 "data_size": 63488 00:21:27.665 } 00:21:27.665 ] 00:21:27.665 }' 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:27.665 13:45:06 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:27.925 [2024-07-10 13:45:07.219982] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:27.925 [2024-07-10 13:45:07.220127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.925 13:45:07 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:27.925 [2024-07-10 13:45:07.275360] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:27.925 [2024-07-10 13:45:07.277275] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.184 [2024-07-10 13:45:07.400341] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:28.184 [2024-07-10 13:45:07.400955] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:28.443 [2024-07-10 13:45:07.616496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:28.443 [2024-07-10 13:45:07.616889] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:28.702 [2024-07-10 13:45:07.843271] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:28.702 [2024-07-10 13:45:07.843844] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:28.702 [2024-07-10 13:45:07.954638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.962 13:45:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.962 [2024-07-10 13:45:08.286040] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:28.962 [2024-07-10 13:45:08.286550] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:29.221 13:45:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.221 "name": "raid_bdev1", 00:21:29.221 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:29.221 "strip_size_kb": 0, 00:21:29.221 "state": "online", 00:21:29.221 "raid_level": "raid1", 00:21:29.221 "superblock": true, 00:21:29.221 "num_base_bdevs": 2, 00:21:29.221 "num_base_bdevs_discovered": 2, 00:21:29.221 "num_base_bdevs_operational": 2, 00:21:29.221 "process": { 00:21:29.221 "type": "rebuild", 00:21:29.221 "target": "spare", 00:21:29.221 "progress": { 00:21:29.221 "blocks": 14336, 00:21:29.221 "percent": 22 00:21:29.221 } 00:21:29.221 }, 00:21:29.221 "base_bdevs_list": [ 00:21:29.221 { 00:21:29.221 "name": "spare", 00:21:29.221 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:29.221 "is_configured": true, 00:21:29.221 "data_offset": 2048, 00:21:29.221 "data_size": 63488 00:21:29.221 }, 00:21:29.221 { 00:21:29.221 "name": "BaseBdev2", 00:21:29.221 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:29.221 "is_configured": true, 00:21:29.221 "data_offset": 2048, 00:21:29.221 "data_size": 63488 00:21:29.221 } 00:21:29.221 ] 00:21:29.221 }' 00:21:29.221 13:45:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.221 [2024-07-10 13:45:08.501835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:29.221 [2024-07-10 13:45:08.502187] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:29.221 13:45:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.221 13:45:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:29.481 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@657 -- # local timeout=423 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.481 "name": "raid_bdev1", 00:21:29.481 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:29.481 "strip_size_kb": 0, 00:21:29.481 "state": "online", 00:21:29.481 "raid_level": "raid1", 00:21:29.481 "superblock": true, 00:21:29.481 "num_base_bdevs": 2, 00:21:29.481 
"num_base_bdevs_discovered": 2, 00:21:29.481 "num_base_bdevs_operational": 2, 00:21:29.481 "process": { 00:21:29.481 "type": "rebuild", 00:21:29.481 "target": "spare", 00:21:29.481 "progress": { 00:21:29.481 "blocks": 18432, 00:21:29.481 "percent": 29 00:21:29.481 } 00:21:29.481 }, 00:21:29.481 "base_bdevs_list": [ 00:21:29.481 { 00:21:29.481 "name": "spare", 00:21:29.481 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:29.481 "is_configured": true, 00:21:29.481 "data_offset": 2048, 00:21:29.481 "data_size": 63488 00:21:29.481 }, 00:21:29.481 { 00:21:29.481 "name": "BaseBdev2", 00:21:29.481 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:29.481 "is_configured": true, 00:21:29.481 "data_offset": 2048, 00:21:29.481 "data_size": 63488 00:21:29.481 } 00:21:29.481 ] 00:21:29.481 }' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.481 13:45:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.739 13:45:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.739 13:45:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:29.739 [2024-07-10 13:45:08.972436] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:29.998 [2024-07-10 13:45:09.292588] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:30.256 [2024-07-10 13:45:09.494487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:30.515 [2024-07-10 13:45:09.728502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.773 13:45:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.773 [2024-07-10 13:45:09.942369] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:30.773 [2024-07-10 13:45:09.942701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:30.773 13:45:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.773 "name": "raid_bdev1", 00:21:30.773 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:30.773 "strip_size_kb": 0, 00:21:30.773 "state": "online", 00:21:30.773 "raid_level": "raid1", 00:21:30.773 "superblock": true, 00:21:30.773 "num_base_bdevs": 2, 00:21:30.773 "num_base_bdevs_discovered": 2, 00:21:30.773 "num_base_bdevs_operational": 2, 00:21:30.773 "process": { 00:21:30.773 "type": "rebuild", 00:21:30.773 "target": "spare", 00:21:30.773 "progress": { 00:21:30.773 "blocks": 34816, 00:21:30.773 "percent": 54 00:21:30.773 } 00:21:30.773 }, 00:21:30.773 
"base_bdevs_list": [ 00:21:30.773 { 00:21:30.773 "name": "spare", 00:21:30.773 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:30.773 "is_configured": true, 00:21:30.773 "data_offset": 2048, 00:21:30.773 "data_size": 63488 00:21:30.773 }, 00:21:30.773 { 00:21:30.773 "name": "BaseBdev2", 00:21:30.773 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:30.773 "is_configured": true, 00:21:30.773 "data_offset": 2048, 00:21:30.773 "data_size": 63488 00:21:30.773 } 00:21:30.773 ] 00:21:30.773 }' 00:21:30.773 13:45:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.045 13:45:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.045 13:45:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.045 13:45:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.045 13:45:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:31.046 [2024-07-10 13:45:10.287167] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:31.304 [2024-07-10 13:45:10.497963] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:31.304 [2024-07-10 13:45:10.498329] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:31.868 [2024-07-10 13:45:10.952067] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.868 13:45:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.125 [2024-07-10 13:45:11.290663] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:32.125 [2024-07-10 13:45:11.291055] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:32.125 13:45:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.125 "name": "raid_bdev1", 00:21:32.125 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:32.125 "strip_size_kb": 0, 00:21:32.125 "state": "online", 00:21:32.125 "raid_level": "raid1", 00:21:32.125 "superblock": true, 00:21:32.125 "num_base_bdevs": 2, 00:21:32.125 "num_base_bdevs_discovered": 2, 00:21:32.125 "num_base_bdevs_operational": 2, 00:21:32.125 "process": { 00:21:32.125 "type": "rebuild", 00:21:32.125 "target": "spare", 00:21:32.125 "progress": { 00:21:32.125 "blocks": 53248, 00:21:32.125 "percent": 83 00:21:32.125 } 00:21:32.125 }, 00:21:32.125 "base_bdevs_list": [ 00:21:32.125 { 00:21:32.125 "name": "spare", 00:21:32.125 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:32.125 "is_configured": true, 00:21:32.125 "data_offset": 2048, 00:21:32.125 "data_size": 63488 00:21:32.125 }, 00:21:32.125 { 00:21:32.125 "name": "BaseBdev2", 
00:21:32.125 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:32.125 "is_configured": true, 00:21:32.125 "data_offset": 2048, 00:21:32.125 "data_size": 63488 00:21:32.125 } 00:21:32.125 ] 00:21:32.125 }' 00:21:32.125 13:45:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.125 13:45:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.382 13:45:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.382 13:45:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.382 13:45:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.382 [2024-07-10 13:45:11.626157] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:32.639 [2024-07-10 13:45:11.946924] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:32.897 [2024-07-10 13:45:12.046800] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:32.897 [2024-07-10 13:45:12.049485] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.464 "name": "raid_bdev1", 00:21:33.464 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:33.464 "strip_size_kb": 0, 00:21:33.464 "state": "online", 00:21:33.464 "raid_level": "raid1", 00:21:33.464 "superblock": true, 00:21:33.464 "num_base_bdevs": 2, 00:21:33.464 "num_base_bdevs_discovered": 2, 00:21:33.464 "num_base_bdevs_operational": 2, 00:21:33.464 "base_bdevs_list": [ 00:21:33.464 { 00:21:33.464 "name": "spare", 00:21:33.464 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:33.464 "is_configured": true, 00:21:33.464 "data_offset": 2048, 00:21:33.464 "data_size": 63488 00:21:33.464 }, 00:21:33.464 { 00:21:33.464 "name": "BaseBdev2", 00:21:33.464 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:33.464 "is_configured": true, 00:21:33.464 "data_offset": 2048, 00:21:33.464 "data_size": 63488 00:21:33.464 } 00:21:33.464 ] 00:21:33.464 }' 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:33.464 13:45:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@660 -- # break 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:33.723 13:45:12 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.723 13:45:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.723 13:45:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.723 "name": "raid_bdev1", 00:21:33.723 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:33.723 "strip_size_kb": 0, 00:21:33.723 "state": "online", 00:21:33.723 "raid_level": "raid1", 00:21:33.723 "superblock": true, 00:21:33.723 "num_base_bdevs": 2, 00:21:33.723 "num_base_bdevs_discovered": 2, 00:21:33.723 "num_base_bdevs_operational": 2, 00:21:33.723 "base_bdevs_list": [ 00:21:33.723 { 00:21:33.723 "name": "spare", 00:21:33.723 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:33.723 "is_configured": true, 00:21:33.723 "data_offset": 2048, 00:21:33.723 "data_size": 63488 00:21:33.723 }, 00:21:33.723 { 00:21:33.723 "name": "BaseBdev2", 00:21:33.723 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:33.723 "is_configured": true, 00:21:33.723 "data_offset": 2048, 00:21:33.723 "data_size": 63488 00:21:33.723 } 00:21:33.723 ] 00:21:33.723 }' 00:21:33.723 13:45:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.982 13:45:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.240 13:45:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.240 "name": "raid_bdev1", 00:21:34.240 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:34.240 "strip_size_kb": 0, 00:21:34.240 "state": "online", 00:21:34.240 "raid_level": "raid1", 00:21:34.240 "superblock": true, 00:21:34.240 "num_base_bdevs": 2, 00:21:34.240 "num_base_bdevs_discovered": 2, 00:21:34.240 "num_base_bdevs_operational": 2, 00:21:34.240 "base_bdevs_list": [ 00:21:34.240 { 00:21:34.240 "name": "spare", 00:21:34.240 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:34.240 "is_configured": true, 00:21:34.240 "data_offset": 2048, 00:21:34.240 "data_size": 63488 00:21:34.240 }, 00:21:34.240 { 00:21:34.240 "name": "BaseBdev2", 00:21:34.240 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:34.240 "is_configured": true, 00:21:34.240 "data_offset": 2048, 00:21:34.240 "data_size": 63488 00:21:34.240 } 00:21:34.240 ] 00:21:34.241 }' 00:21:34.241 13:45:13 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:34.241 13:45:13 -- common/autotest_common.sh@10 -- # set +x
00:21:34.807 13:45:13 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:34.807 [2024-07-10 13:45:14.154851] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:34.807 [2024-07-10 13:45:14.154957] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:35.067
00:21:35.067 Latency(us)
00:21:35.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:35.067 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:35.067 raid_bdev1 : 11.52 103.63 310.89 0.00 0.00 13072.53 361.31 131873.31
00:21:35.067 ===================================================================================================================
00:21:35.067 Total : 103.63 310.89 0.00 0.00 13072.53 361.31 131873.31
00:21:35.067 [2024-07-10 13:45:14.277991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:35.067 [2024-07-10 13:45:14.278079] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:35.067 [2024-07-10 13:45:14.278177] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:35.067 [2024-07-10 13:45:14.278208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline
00:21:35.067 0
00:21:35.067 13:45:14 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:35.067 13:45:14 -- bdev/bdev_raid.sh@671 -- # jq length
00:21:35.327 13:45:14 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:21:35.327 13:45:14 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:21:35.327 13:45:14 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2)
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3)
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@12 -- # local i
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:35.327 13:45:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:21:35.586 /dev/nbd0
00:21:35.586 13:45:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:35.586 13:45:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:35.586 13:45:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:21:35.586 13:45:14 -- common/autotest_common.sh@857 -- # local i
00:21:35.586 13:45:14 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:21:35.586 13:45:14 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:21:35.586 13:45:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:21:35.586 13:45:14 -- common/autotest_common.sh@861 -- # break
00:21:35.586 13:45:14 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:21:35.586 13:45:14 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:21:35.586 13:45:14 -- common/autotest_common.sh@873 -- # dd
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:35.586 1+0 records in 00:21:35.586 1+0 records out 00:21:35.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261064 s, 15.7 MB/s 00:21:35.586 13:45:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.586 13:45:14 -- common/autotest_common.sh@874 -- # size=4096 00:21:35.586 13:45:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.586 13:45:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:35.586 13:45:14 -- common/autotest_common.sh@877 -- # return 0 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.586 13:45:14 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:35.586 13:45:14 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:35.586 13:45:14 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@12 -- # local i 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.586 13:45:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:35.844 /dev/nbd1 00:21:35.844 13:45:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:35.844 13:45:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:35.844 13:45:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:35.844 13:45:14 -- common/autotest_common.sh@857 -- # local i 00:21:35.845 13:45:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:35.845 13:45:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:35.845 13:45:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:35.845 13:45:14 -- common/autotest_common.sh@861 -- # break 00:21:35.845 13:45:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:35.845 13:45:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:35.845 13:45:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:35.845 1+0 records in 00:21:35.845 1+0 records out 00:21:35.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027662 s, 14.8 MB/s 00:21:35.845 13:45:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.845 13:45:14 -- common/autotest_common.sh@874 -- # size=4096 00:21:35.845 13:45:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.845 13:45:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:35.845 13:45:14 -- common/autotest_common.sh@877 -- # return 0 00:21:35.845 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:35.845 13:45:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.845 13:45:14 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:35.845 13:45:15 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd1 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@51 -- # local i 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:35.845 13:45:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.103 13:45:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@41 -- # break 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@45 -- # return 0 00:21:36.362 13:45:15 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@51 -- # local i 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:36.362 13:45:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:36.621 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:36.621 13:45:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.621 13:45:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:36.621 13:45:15 -- bdev/nbd_common.sh@41 -- # break 00:21:36.621 13:45:15 -- bdev/nbd_common.sh@45 -- # return 0 00:21:36.621 13:45:15 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:36.621 13:45:15 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:36.621 13:45:15 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:36.621 13:45:15 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:36.621 13:45:15 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:36.880 [2024-07-10 13:45:16.152355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:36.880 [2024-07-10 13:45:16.152463] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.880 [2024-07-10 13:45:16.152494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:36.880 [2024-07-10 13:45:16.152517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.880 [2024-07-10 13:45:16.154652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.880 [2024-07-10 13:45:16.154732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.880 [2024-07-10 13:45:16.154846] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:36.880 [2024-07-10 13:45:16.154913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.880 BaseBdev1 00:21:36.880 13:45:16 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:36.880 13:45:16 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:36.880 13:45:16 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:37.140 13:45:16 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:37.398 [2024-07-10 13:45:16.523760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:37.398 [2024-07-10 13:45:16.523862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.398 [2024-07-10 13:45:16.523891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:37.398 [2024-07-10 13:45:16.523916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.398 [2024-07-10 13:45:16.524376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.398 [2024-07-10 13:45:16.524428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:37.398 [2024-07-10 13:45:16.524535] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:37.398 [2024-07-10 13:45:16.524551] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:37.398 [2024-07-10 13:45:16.524557] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.398 [2024-07-10 13:45:16.524575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:37.398 [2024-07-10 13:45:16.524647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.398 BaseBdev2 00:21:37.398 13:45:16 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:37.398 13:45:16 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:37.761 [2024-07-10 13:45:16.919210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:37.761 [2024-07-10 13:45:16.919302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.761 [2024-07-10 13:45:16.919340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:37.761 [2024-07-10 13:45:16.919356] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.761 [2024-07-10 13:45:16.919873] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.761 [2024-07-10 13:45:16.919918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:37.761 [2024-07-10 13:45:16.920046] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:37.761 [2024-07-10 13:45:16.920103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:37.761 spare 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.761 13:45:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.761 [2024-07-10 13:45:17.020030] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:37.761 [2024-07-10 13:45:17.020070] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:37.761 [2024-07-10 13:45:17.020239] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:21:37.761 [2024-07-10 13:45:17.020636] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:37.761 [2024-07-10 13:45:17.020658] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:37.761 [2024-07-10 13:45:17.020816] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.055 13:45:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.055 "name": "raid_bdev1", 00:21:38.055 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:38.055 "strip_size_kb": 0, 00:21:38.055 "state": "online", 00:21:38.055 "raid_level": "raid1", 00:21:38.055 "superblock": true, 00:21:38.055 "num_base_bdevs": 2, 00:21:38.055 "num_base_bdevs_discovered": 2, 00:21:38.055 "num_base_bdevs_operational": 2, 00:21:38.055 "base_bdevs_list": [ 00:21:38.055 { 00:21:38.055 "name": "spare", 00:21:38.055 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:38.055 "is_configured": true, 00:21:38.055 "data_offset": 2048, 00:21:38.055 "data_size": 63488 00:21:38.055 }, 00:21:38.055 { 00:21:38.055 "name": "BaseBdev2", 00:21:38.055 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:38.055 "is_configured": true, 00:21:38.055 "data_offset": 2048, 00:21:38.055 "data_size": 63488 00:21:38.055 } 00:21:38.055 ] 00:21:38.055 }' 00:21:38.055 13:45:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.055 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.623 13:45:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.882 "name": "raid_bdev1", 00:21:38.882 "uuid": "47e9bedf-7cfd-4272-bf68-9d5620650d01", 00:21:38.882 "strip_size_kb": 0, 00:21:38.882 "state": "online", 00:21:38.882 "raid_level": "raid1", 00:21:38.882 "superblock": true, 00:21:38.882 "num_base_bdevs": 2, 00:21:38.882 "num_base_bdevs_discovered": 2, 00:21:38.882 "num_base_bdevs_operational": 2, 00:21:38.882 "base_bdevs_list": [ 00:21:38.882 { 00:21:38.882 "name": "spare", 00:21:38.882 "uuid": "f9d96153-5a11-5144-96c7-63f48116ea5b", 00:21:38.882 "is_configured": true, 00:21:38.882 "data_offset": 2048, 00:21:38.882 "data_size": 63488 00:21:38.882 }, 00:21:38.882 { 00:21:38.882 "name": "BaseBdev2", 00:21:38.882 "uuid": "b7c9a47f-f795-5698-be1f-0b28f3a59489", 00:21:38.882 "is_configured": true, 00:21:38.882 "data_offset": 2048, 00:21:38.882 "data_size": 63488 00:21:38.882 } 00:21:38.882 ] 00:21:38.882 }' 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.882 13:45:18 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:39.141 13:45:18 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.141 13:45:18 -- bdev/bdev_raid.sh@709 -- # killprocess 127377 00:21:39.141 13:45:18 -- common/autotest_common.sh@926 -- # '[' -z 127377 ']' 00:21:39.141 13:45:18 -- common/autotest_common.sh@930 -- # kill -0 127377 00:21:39.141 13:45:18 -- common/autotest_common.sh@931 -- # uname 00:21:39.141 13:45:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:39.141 13:45:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127377 00:21:39.141 13:45:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:39.141 13:45:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:39.141 13:45:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127377' 00:21:39.141 killing process with pid 127377 00:21:39.141 13:45:18 -- common/autotest_common.sh@945 -- # kill 127377 00:21:39.141 Received shutdown signal, test time was about 15.675585 seconds 00:21:39.141 00:21:39.141 Latency(us) 00:21:39.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.141 =================================================================================================================== 00:21:39.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.142 [2024-07-10 13:45:18.400613] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:39.142 [2024-07-10 13:45:18.400716] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.142 13:45:18 -- common/autotest_common.sh@950 -- # wait 127377 00:21:39.142 [2024-07-10 13:45:18.400796] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.142 [2024-07-10 13:45:18.400807] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:39.400 [2024-07-10 13:45:18.640344] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:40.772 13:45:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:40.772 00:21:40.772 real 0m20.993s 00:21:40.772 user 0m32.882s 00:21:40.772 sys 0m1.956s 00:21:40.772 13:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.772 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:21:40.772 ************************************ 00:21:40.772 END TEST raid_rebuild_test_sb_io 00:21:40.772 ************************************ 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:41.029 13:45:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:41.029 13:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:41.029 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:21:41.029 ************************************ 00:21:41.029 START TEST raid_rebuild_test 00:21:41.029 ************************************ 00:21:41.029 13:45:20 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:41.029 
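The raid_rebuild_test run starting here drives everything through a bdevperf app parked in -z (wait-for-RPC) mode on a private socket, as the trace below shows; nothing proceeds until that socket answers. A condensed sketch of the launch-and-wait step, with the flags copied from the trace; the simplified readiness loop merely stands in for the harness's waitforlisten, using rpc_get_methods as a cheap probe:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-raid.sock
  # -z parks bdevperf until RPCs arrive; -L bdev_raid enables the DEBUG
  # lines that make up most of this trace.
  "$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Probe the RPC socket until the app is ready to take bdev RPCs.
  until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
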
13:45:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:41.029 13:45:20 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:41.030 13:45:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=127971 00:21:41.030 13:45:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127971 /var/tmp/spdk-raid.sock 00:21:41.030 13:45:20 -- common/autotest_common.sh@819 -- # '[' -z 127971 ']' 00:21:41.030 13:45:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.030 13:45:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.030 13:45:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.030 13:45:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:41.030 13:45:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.030 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:21:41.030 [2024-07-10 13:45:20.214314] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:41.030 [2024-07-10 13:45:20.214461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127971 ] 00:21:41.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:41.030 Zero copy mechanism will not be used. 00:21:41.030 [2024-07-10 13:45:20.371867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.288 [2024-07-10 13:45:20.587754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.545 [2024-07-10 13:45:20.808224] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.804 13:45:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:41.804 13:45:21 -- common/autotest_common.sh@852 -- # return 0 00:21:41.804 13:45:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:41.804 13:45:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:41.804 13:45:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.094 BaseBdev1 00:21:42.094 13:45:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:42.094 13:45:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:42.094 13:45:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:42.352 BaseBdev2 00:21:42.352 13:45:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:42.352 13:45:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:42.352 13:45:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:42.610 BaseBdev3 00:21:42.610 13:45:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:42.610 13:45:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:42.610 13:45:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:42.868 BaseBdev4 00:21:42.868 13:45:22 -- 
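The trace next assembles the rebuild target as a three-layer stack: a malloc bdev, a delay bdev on top of it, and a passthru bdev exposing the name spare that the test consumes. Pulled out of the trace, the stack is just three RPCs; the comments assume the standard bdev_delay_create flag meanings (-r/-t average and p99 read latency, -w/-n the write-side pair, in microseconds):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b spare_malloc   # 32 MB backing store, 512 B blocks
  # Reads pass through untouched; writes pick up a 100000 us delay, which
  # presumably stretches the rebuild out far enough for the progress
  # checks later in the trace to observe it mid-flight.
  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
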
bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:43.127 spare_malloc 00:21:43.127 13:45:22 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:43.127 spare_delay 00:21:43.127 13:45:22 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:43.385 [2024-07-10 13:45:22.667813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:43.385 [2024-07-10 13:45:22.667906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.385 [2024-07-10 13:45:22.667936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:43.385 [2024-07-10 13:45:22.667973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.385 [2024-07-10 13:45:22.670129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.385 [2024-07-10 13:45:22.670176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:43.385 spare 00:21:43.385 13:45:22 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:43.644 [2024-07-10 13:45:22.867500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.644 [2024-07-10 13:45:22.869403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.644 [2024-07-10 13:45:22.869452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.644 [2024-07-10 13:45:22.869483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:43.644 [2024-07-10 13:45:22.869552] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:43.644 [2024-07-10 13:45:22.869560] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:43.644 [2024-07-10 13:45:22.869740] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:43.644 [2024-07-10 13:45:22.870074] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:43.644 [2024-07-10 13:45:22.870094] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:43.644 [2024-07-10 13:45:22.870263] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.644 13:45:22 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.644 13:45:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.903 13:45:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.903 "name": "raid_bdev1", 00:21:43.903 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:43.903 "strip_size_kb": 0, 00:21:43.903 "state": "online", 00:21:43.903 "raid_level": "raid1", 00:21:43.903 "superblock": false, 00:21:43.903 "num_base_bdevs": 4, 00:21:43.903 "num_base_bdevs_discovered": 4, 00:21:43.903 "num_base_bdevs_operational": 4, 00:21:43.903 "base_bdevs_list": [ 00:21:43.903 { 00:21:43.903 "name": "BaseBdev1", 00:21:43.903 "uuid": "27f877ac-63b0-4681-8aee-eed9c3f6034b", 00:21:43.903 "is_configured": true, 00:21:43.903 "data_offset": 0, 00:21:43.903 "data_size": 65536 00:21:43.903 }, 00:21:43.903 { 00:21:43.903 "name": "BaseBdev2", 00:21:43.903 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:43.903 "is_configured": true, 00:21:43.903 "data_offset": 0, 00:21:43.903 "data_size": 65536 00:21:43.903 }, 00:21:43.903 { 00:21:43.903 "name": "BaseBdev3", 00:21:43.903 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:43.903 "is_configured": true, 00:21:43.903 "data_offset": 0, 00:21:43.903 "data_size": 65536 00:21:43.903 }, 00:21:43.903 { 00:21:43.903 "name": "BaseBdev4", 00:21:43.903 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:43.903 "is_configured": true, 00:21:43.903 "data_offset": 0, 00:21:43.903 "data_size": 65536 00:21:43.903 } 00:21:43.903 ] 00:21:43.903 }' 00:21:43.903 13:45:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.903 13:45:23 -- common/autotest_common.sh@10 -- # set +x 00:21:44.469 13:45:23 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:44.469 13:45:23 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:44.728 [2024-07-10 13:45:23.885934] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.728 13:45:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:44.728 13:45:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.728 13:45:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:44.985 13:45:24 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@12 -- # local i 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:44.986 [2024-07-10 13:45:24.277098] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:44.986 /dev/nbd0 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:44.986 13:45:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:44.986 13:45:24 -- common/autotest_common.sh@857 -- # local i 00:21:44.986 13:45:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:44.986 13:45:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:44.986 13:45:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:44.986 13:45:24 -- common/autotest_common.sh@861 -- # break 00:21:44.986 13:45:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:44.986 13:45:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:44.986 13:45:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.986 1+0 records in 00:21:44.986 1+0 records out 00:21:44.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002024 s, 20.2 MB/s 00:21:44.986 13:45:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.986 13:45:24 -- common/autotest_common.sh@874 -- # size=4096 00:21:44.986 13:45:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.986 13:45:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:44.986 13:45:24 -- common/autotest_common.sh@877 -- # return 0 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.986 13:45:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:44.986 13:45:24 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:50.257 65536+0 records in 00:21:50.257 65536+0 records out 00:21:50.257 33554432 bytes (34 MB, 32 MiB) copied, 4.84173 s, 6.9 MB/s 00:21:50.257 13:45:29 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:50.257 [2024-07-10 13:45:29.408070] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.257 13:45:29 -- 
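The fill that runs through this stretch is worth spelling out: the freshly configured raid bdev is exported through NBD, probed for readiness, and seeded with 32 MiB of random data so the later rebuild has known content to reconstruct. A condensed sketch of that step, assuming the same device names; the probe mirrors waitfornbd's grep-plus-direct-I/O check rather than quoting it exactly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc nbd_start_disk raid_bdev1 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      # Appearing in /proc/partitions is not enough; the device only
      # counts as ready once a one-block direct read succeeds.
      grep -q -w nbd0 /proc/partitions &&
          dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null && break
      sleep 0.1
  done
  # 65536 blocks x 512 B = 32 MiB, the raid_bdev_size computed above.
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  $rpc nbd_stop_disk /dev/nbd0
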
bdev/nbd_common.sh@41 -- # break 00:21:50.257 13:45:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.257 13:45:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:50.517 [2024-07-10 13:45:29.707290] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.517 13:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.777 13:45:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.777 "name": "raid_bdev1", 00:21:50.777 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:50.777 "strip_size_kb": 0, 00:21:50.777 "state": "online", 00:21:50.777 "raid_level": "raid1", 00:21:50.777 "superblock": false, 00:21:50.777 "num_base_bdevs": 4, 00:21:50.777 "num_base_bdevs_discovered": 3, 00:21:50.777 "num_base_bdevs_operational": 3, 00:21:50.777 "base_bdevs_list": [ 00:21:50.777 { 00:21:50.777 "name": null, 00:21:50.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.777 "is_configured": false, 00:21:50.777 "data_offset": 0, 00:21:50.777 "data_size": 65536 00:21:50.777 }, 00:21:50.777 { 00:21:50.777 "name": "BaseBdev2", 00:21:50.777 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:50.777 "is_configured": true, 00:21:50.777 "data_offset": 0, 00:21:50.777 "data_size": 65536 00:21:50.777 }, 00:21:50.777 { 00:21:50.777 "name": "BaseBdev3", 00:21:50.777 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:50.777 "is_configured": true, 00:21:50.777 "data_offset": 0, 00:21:50.777 "data_size": 65536 00:21:50.777 }, 00:21:50.777 { 00:21:50.777 "name": "BaseBdev4", 00:21:50.777 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:50.777 "is_configured": true, 00:21:50.777 "data_offset": 0, 00:21:50.777 "data_size": 65536 00:21:50.777 } 00:21:50.777 ] 00:21:50.777 }' 00:21:50.777 13:45:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.777 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:21:51.346 13:45:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.606 [2024-07-10 13:45:30.785454] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:51.606 [2024-07-10 13:45:30.785499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.606 [2024-07-10 13:45:30.799074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0ab40 00:21:51.606 [2024-07-10 13:45:30.800993] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:21:51.606 13:45:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.572 13:45:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.847 "name": "raid_bdev1", 00:21:52.847 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:52.847 "strip_size_kb": 0, 00:21:52.847 "state": "online", 00:21:52.847 "raid_level": "raid1", 00:21:52.847 "superblock": false, 00:21:52.847 "num_base_bdevs": 4, 00:21:52.847 "num_base_bdevs_discovered": 4, 00:21:52.847 "num_base_bdevs_operational": 4, 00:21:52.847 "process": { 00:21:52.847 "type": "rebuild", 00:21:52.847 "target": "spare", 00:21:52.847 "progress": { 00:21:52.847 "blocks": 24576, 00:21:52.847 "percent": 37 00:21:52.847 } 00:21:52.847 }, 00:21:52.847 "base_bdevs_list": [ 00:21:52.847 { 00:21:52.847 "name": "spare", 00:21:52.847 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:52.847 "is_configured": true, 00:21:52.847 "data_offset": 0, 00:21:52.847 "data_size": 65536 00:21:52.847 }, 00:21:52.847 { 00:21:52.847 "name": "BaseBdev2", 00:21:52.847 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:52.847 "is_configured": true, 00:21:52.847 "data_offset": 0, 00:21:52.847 "data_size": 65536 00:21:52.847 }, 00:21:52.847 { 00:21:52.847 "name": "BaseBdev3", 00:21:52.847 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:52.847 "is_configured": true, 00:21:52.847 "data_offset": 0, 00:21:52.847 "data_size": 65536 00:21:52.847 }, 00:21:52.847 { 00:21:52.847 "name": "BaseBdev4", 00:21:52.847 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:52.847 "is_configured": true, 00:21:52.847 "data_offset": 0, 00:21:52.847 "data_size": 65536 00:21:52.847 } 00:21:52.847 ] 00:21:52.847 }' 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.847 13:45:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:53.106 [2024-07-10 13:45:32.312718] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.106 [2024-07-10 13:45:32.408536] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.106 [2024-07-10 13:45:32.408649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:53.106 
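From this point the test repeatedly re-reads raid_bdev1's .process object: while a rebuild runs it carries type "rebuild", target "spare", and a block-granular progress counter (24576 blocks, 37% in the snapshot above), and once rebuilding stops the object disappears so the jq fallbacks report "none" again. A small polling sketch assembled from the same RPC and filters seen in the trace; the loop itself is illustrative, not the harness's verify helper:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  while :; do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # Leave the loop as soon as no rebuild process is attached.
      [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
      jq -r '.process.progress | "rebuilt \(.blocks) blocks (\(.percent)%)"' <<<"$info"
      sleep 1
  done
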
13:45:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.106 13:45:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.366 13:45:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.366 "name": "raid_bdev1", 00:21:53.366 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:53.366 "strip_size_kb": 0, 00:21:53.366 "state": "online", 00:21:53.366 "raid_level": "raid1", 00:21:53.366 "superblock": false, 00:21:53.366 "num_base_bdevs": 4, 00:21:53.366 "num_base_bdevs_discovered": 3, 00:21:53.366 "num_base_bdevs_operational": 3, 00:21:53.366 "base_bdevs_list": [ 00:21:53.366 { 00:21:53.366 "name": null, 00:21:53.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.366 "is_configured": false, 00:21:53.366 "data_offset": 0, 00:21:53.366 "data_size": 65536 00:21:53.366 }, 00:21:53.366 { 00:21:53.366 "name": "BaseBdev2", 00:21:53.366 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:53.366 "is_configured": true, 00:21:53.366 "data_offset": 0, 00:21:53.366 "data_size": 65536 00:21:53.366 }, 00:21:53.366 { 00:21:53.366 "name": "BaseBdev3", 00:21:53.366 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:53.366 "is_configured": true, 00:21:53.366 "data_offset": 0, 00:21:53.366 "data_size": 65536 00:21:53.366 }, 00:21:53.366 { 00:21:53.366 "name": "BaseBdev4", 00:21:53.366 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:53.366 "is_configured": true, 00:21:53.366 "data_offset": 0, 00:21:53.366 "data_size": 65536 00:21:53.366 } 00:21:53.366 ] 00:21:53.366 }' 00:21:53.366 13:45:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.366 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.935 13:45:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.194 13:45:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.194 "name": "raid_bdev1", 00:21:54.194 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:54.194 "strip_size_kb": 0, 00:21:54.194 "state": "online", 00:21:54.194 "raid_level": "raid1", 00:21:54.194 "superblock": false, 00:21:54.194 "num_base_bdevs": 4, 00:21:54.194 "num_base_bdevs_discovered": 3, 00:21:54.194 "num_base_bdevs_operational": 3, 00:21:54.194 "base_bdevs_list": [ 00:21:54.194 { 00:21:54.194 "name": null, 00:21:54.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.194 "is_configured": false, 00:21:54.194 "data_offset": 0, 00:21:54.194 "data_size": 65536 00:21:54.194 }, 00:21:54.194 { 00:21:54.194 
"name": "BaseBdev2", 00:21:54.194 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:54.194 "is_configured": true, 00:21:54.194 "data_offset": 0, 00:21:54.194 "data_size": 65536 00:21:54.194 }, 00:21:54.194 { 00:21:54.194 "name": "BaseBdev3", 00:21:54.194 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:54.194 "is_configured": true, 00:21:54.194 "data_offset": 0, 00:21:54.194 "data_size": 65536 00:21:54.194 }, 00:21:54.194 { 00:21:54.195 "name": "BaseBdev4", 00:21:54.195 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:54.195 "is_configured": true, 00:21:54.195 "data_offset": 0, 00:21:54.195 "data_size": 65536 00:21:54.195 } 00:21:54.195 ] 00:21:54.195 }' 00:21:54.195 13:45:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.195 13:45:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.195 13:45:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.195 13:45:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:54.195 13:45:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:54.454 [2024-07-10 13:45:33.692313] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:54.454 [2024-07-10 13:45:33.692357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.454 [2024-07-10 13:45:33.705377] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0ace0 00:21:54.454 [2024-07-10 13:45:33.706983] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.454 13:45:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.390 13:45:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.650 13:45:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:55.650 "name": "raid_bdev1", 00:21:55.650 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:55.650 "strip_size_kb": 0, 00:21:55.650 "state": "online", 00:21:55.650 "raid_level": "raid1", 00:21:55.650 "superblock": false, 00:21:55.650 "num_base_bdevs": 4, 00:21:55.650 "num_base_bdevs_discovered": 4, 00:21:55.650 "num_base_bdevs_operational": 4, 00:21:55.650 "process": { 00:21:55.650 "type": "rebuild", 00:21:55.650 "target": "spare", 00:21:55.650 "progress": { 00:21:55.650 "blocks": 24576, 00:21:55.650 "percent": 37 00:21:55.650 } 00:21:55.650 }, 00:21:55.650 "base_bdevs_list": [ 00:21:55.650 { 00:21:55.650 "name": "spare", 00:21:55.650 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:55.650 "is_configured": true, 00:21:55.650 "data_offset": 0, 00:21:55.650 "data_size": 65536 00:21:55.650 }, 00:21:55.650 { 00:21:55.650 "name": "BaseBdev2", 00:21:55.650 "uuid": "63a0dcdb-5f02-4f06-8e4e-e9cc208b27c7", 00:21:55.650 "is_configured": true, 00:21:55.650 "data_offset": 0, 00:21:55.650 "data_size": 65536 00:21:55.650 }, 00:21:55.650 { 00:21:55.650 "name": "BaseBdev3", 00:21:55.650 
"uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:55.650 "is_configured": true, 00:21:55.650 "data_offset": 0, 00:21:55.650 "data_size": 65536 00:21:55.650 }, 00:21:55.650 { 00:21:55.650 "name": "BaseBdev4", 00:21:55.650 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:55.650 "is_configured": true, 00:21:55.650 "data_offset": 0, 00:21:55.650 "data_size": 65536 00:21:55.650 } 00:21:55.650 ] 00:21:55.650 }' 00:21:55.650 13:45:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:55.910 13:45:35 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:55.910 [2024-07-10 13:45:35.250929] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:56.169 [2024-07-10 13:45:35.314292] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0ace0 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.169 13:45:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.428 "name": "raid_bdev1", 00:21:56.428 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:56.428 "strip_size_kb": 0, 00:21:56.428 "state": "online", 00:21:56.428 "raid_level": "raid1", 00:21:56.428 "superblock": false, 00:21:56.428 "num_base_bdevs": 4, 00:21:56.428 "num_base_bdevs_discovered": 3, 00:21:56.428 "num_base_bdevs_operational": 3, 00:21:56.428 "process": { 00:21:56.428 "type": "rebuild", 00:21:56.428 "target": "spare", 00:21:56.428 "progress": { 00:21:56.428 "blocks": 34816, 00:21:56.428 "percent": 53 00:21:56.428 } 00:21:56.428 }, 00:21:56.428 "base_bdevs_list": [ 00:21:56.428 { 00:21:56.428 "name": "spare", 00:21:56.428 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:56.428 "is_configured": true, 00:21:56.428 "data_offset": 0, 00:21:56.428 "data_size": 65536 00:21:56.428 }, 00:21:56.428 { 00:21:56.428 "name": null, 00:21:56.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.428 "is_configured": false, 00:21:56.428 "data_offset": 0, 00:21:56.428 "data_size": 65536 00:21:56.428 }, 00:21:56.428 { 00:21:56.428 "name": "BaseBdev3", 00:21:56.428 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:56.428 "is_configured": true, 00:21:56.428 "data_offset": 0, 
00:21:56.428 "data_size": 65536 00:21:56.428 }, 00:21:56.428 { 00:21:56.428 "name": "BaseBdev4", 00:21:56.428 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:56.428 "is_configured": true, 00:21:56.428 "data_offset": 0, 00:21:56.428 "data_size": 65536 00:21:56.428 } 00:21:56.428 ] 00:21:56.428 }' 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@657 -- # local timeout=450 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.428 13:45:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.687 "name": "raid_bdev1", 00:21:56.687 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:56.687 "strip_size_kb": 0, 00:21:56.687 "state": "online", 00:21:56.687 "raid_level": "raid1", 00:21:56.687 "superblock": false, 00:21:56.687 "num_base_bdevs": 4, 00:21:56.687 "num_base_bdevs_discovered": 3, 00:21:56.687 "num_base_bdevs_operational": 3, 00:21:56.687 "process": { 00:21:56.687 "type": "rebuild", 00:21:56.687 "target": "spare", 00:21:56.687 "progress": { 00:21:56.687 "blocks": 40960, 00:21:56.687 "percent": 62 00:21:56.687 } 00:21:56.687 }, 00:21:56.687 "base_bdevs_list": [ 00:21:56.687 { 00:21:56.687 "name": "spare", 00:21:56.687 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:56.687 "is_configured": true, 00:21:56.687 "data_offset": 0, 00:21:56.687 "data_size": 65536 00:21:56.687 }, 00:21:56.687 { 00:21:56.687 "name": null, 00:21:56.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.687 "is_configured": false, 00:21:56.687 "data_offset": 0, 00:21:56.687 "data_size": 65536 00:21:56.687 }, 00:21:56.687 { 00:21:56.687 "name": "BaseBdev3", 00:21:56.687 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:56.687 "is_configured": true, 00:21:56.687 "data_offset": 0, 00:21:56.687 "data_size": 65536 00:21:56.687 }, 00:21:56.687 { 00:21:56.687 "name": "BaseBdev4", 00:21:56.687 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:56.687 "is_configured": true, 00:21:56.687 "data_offset": 0, 00:21:56.687 "data_size": 65536 00:21:56.687 } 00:21:56.687 ] 00:21:56.687 }' 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.687 13:45:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.624 13:45:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.625 13:45:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.625 13:45:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.625 [2024-07-10 13:45:36.930801] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:57.625 [2024-07-10 13:45:36.930869] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:57.625 [2024-07-10 13:45:36.930947] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.884 "name": "raid_bdev1", 00:21:57.884 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:57.884 "strip_size_kb": 0, 00:21:57.884 "state": "online", 00:21:57.884 "raid_level": "raid1", 00:21:57.884 "superblock": false, 00:21:57.884 "num_base_bdevs": 4, 00:21:57.884 "num_base_bdevs_discovered": 3, 00:21:57.884 "num_base_bdevs_operational": 3, 00:21:57.884 "base_bdevs_list": [ 00:21:57.884 { 00:21:57.884 "name": "spare", 00:21:57.884 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:57.884 "is_configured": true, 00:21:57.884 "data_offset": 0, 00:21:57.884 "data_size": 65536 00:21:57.884 }, 00:21:57.884 { 00:21:57.884 "name": null, 00:21:57.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.884 "is_configured": false, 00:21:57.884 "data_offset": 0, 00:21:57.884 "data_size": 65536 00:21:57.884 }, 00:21:57.884 { 00:21:57.884 "name": "BaseBdev3", 00:21:57.884 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:57.884 "is_configured": true, 00:21:57.884 "data_offset": 0, 00:21:57.884 "data_size": 65536 00:21:57.884 }, 00:21:57.884 { 00:21:57.884 "name": "BaseBdev4", 00:21:57.884 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:57.884 "is_configured": true, 00:21:57.884 "data_offset": 0, 00:21:57.884 "data_size": 65536 00:21:57.884 } 00:21:57.884 ] 00:21:57.884 }' 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@660 -- # break 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.884 13:45:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.144 13:45:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:58.144 "name": "raid_bdev1", 00:21:58.144 "uuid": 
"2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:58.144 "strip_size_kb": 0, 00:21:58.144 "state": "online", 00:21:58.144 "raid_level": "raid1", 00:21:58.144 "superblock": false, 00:21:58.144 "num_base_bdevs": 4, 00:21:58.144 "num_base_bdevs_discovered": 3, 00:21:58.144 "num_base_bdevs_operational": 3, 00:21:58.144 "base_bdevs_list": [ 00:21:58.144 { 00:21:58.144 "name": "spare", 00:21:58.144 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:58.144 "is_configured": true, 00:21:58.144 "data_offset": 0, 00:21:58.144 "data_size": 65536 00:21:58.144 }, 00:21:58.144 { 00:21:58.144 "name": null, 00:21:58.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.144 "is_configured": false, 00:21:58.144 "data_offset": 0, 00:21:58.144 "data_size": 65536 00:21:58.144 }, 00:21:58.144 { 00:21:58.144 "name": "BaseBdev3", 00:21:58.144 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:58.144 "is_configured": true, 00:21:58.144 "data_offset": 0, 00:21:58.144 "data_size": 65536 00:21:58.144 }, 00:21:58.144 { 00:21:58.144 "name": "BaseBdev4", 00:21:58.144 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:58.144 "is_configured": true, 00:21:58.144 "data_offset": 0, 00:21:58.144 "data_size": 65536 00:21:58.144 } 00:21:58.144 ] 00:21:58.144 }' 00:21:58.144 13:45:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.144 13:45:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:58.144 13:45:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.403 13:45:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.403 "name": "raid_bdev1", 00:21:58.403 "uuid": "2ac84577-0818-4f18-93ab-89da1bd8dd63", 00:21:58.403 "strip_size_kb": 0, 00:21:58.403 "state": "online", 00:21:58.403 "raid_level": "raid1", 00:21:58.403 "superblock": false, 00:21:58.403 "num_base_bdevs": 4, 00:21:58.403 "num_base_bdevs_discovered": 3, 00:21:58.403 "num_base_bdevs_operational": 3, 00:21:58.403 "base_bdevs_list": [ 00:21:58.403 { 00:21:58.403 "name": "spare", 00:21:58.403 "uuid": "e9ed9f55-3845-5ea1-8c96-a912fd0158f4", 00:21:58.403 "is_configured": true, 00:21:58.403 "data_offset": 0, 00:21:58.403 "data_size": 65536 00:21:58.403 }, 00:21:58.403 { 00:21:58.403 "name": null, 00:21:58.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.403 "is_configured": false, 00:21:58.403 "data_offset": 0, 00:21:58.403 "data_size": 65536 00:21:58.403 }, 00:21:58.403 { 00:21:58.403 "name": 
"BaseBdev3", 00:21:58.403 "uuid": "6bffbe90-e61b-4f7a-bf41-65ddfc029646", 00:21:58.403 "is_configured": true, 00:21:58.403 "data_offset": 0, 00:21:58.403 "data_size": 65536 00:21:58.403 }, 00:21:58.403 { 00:21:58.404 "name": "BaseBdev4", 00:21:58.404 "uuid": "a2bfb1b5-ca26-4811-a597-404e9daa9339", 00:21:58.404 "is_configured": true, 00:21:58.404 "data_offset": 0, 00:21:58.404 "data_size": 65536 00:21:58.404 } 00:21:58.404 ] 00:21:58.404 }' 00:21:58.404 13:45:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.404 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:21:59.343 13:45:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:59.343 [2024-07-10 13:45:38.572425] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.343 [2024-07-10 13:45:38.572465] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.343 [2024-07-10 13:45:38.572551] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.343 [2024-07-10 13:45:38.572618] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.343 [2024-07-10 13:45:38.572626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:59.343 13:45:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:59.343 13:45:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.603 13:45:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:59.603 13:45:38 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:59.603 13:45:38 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@12 -- # local i 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.603 13:45:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:59.863 /dev/nbd0 00:21:59.863 13:45:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:59.863 13:45:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:59.863 13:45:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:59.863 13:45:38 -- common/autotest_common.sh@857 -- # local i 00:21:59.863 13:45:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:59.863 13:45:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:59.863 13:45:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:59.863 13:45:38 -- common/autotest_common.sh@861 -- # break 00:21:59.863 13:45:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:59.863 13:45:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:59.863 13:45:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.863 1+0 records in 00:21:59.863 1+0 
records out 00:21:59.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202731 s, 20.2 MB/s 00:21:59.863 13:45:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.863 13:45:39 -- common/autotest_common.sh@874 -- # size=4096 00:21:59.863 13:45:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.863 13:45:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:59.863 13:45:39 -- common/autotest_common.sh@877 -- # return 0 00:21:59.863 13:45:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.863 13:45:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.863 13:45:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:00.125 /dev/nbd1 00:22:00.125 13:45:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:00.125 13:45:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:00.125 13:45:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:00.125 13:45:39 -- common/autotest_common.sh@857 -- # local i 00:22:00.125 13:45:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:00.125 13:45:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:00.125 13:45:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:00.125 13:45:39 -- common/autotest_common.sh@861 -- # break 00:22:00.125 13:45:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:00.125 13:45:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:00.125 13:45:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.125 1+0 records in 00:22:00.125 1+0 records out 00:22:00.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487404 s, 8.4 MB/s 00:22:00.125 13:45:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.125 13:45:39 -- common/autotest_common.sh@874 -- # size=4096 00:22:00.125 13:45:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.125 13:45:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:00.125 13:45:39 -- common/autotest_common.sh@877 -- # return 0 00:22:00.125 13:45:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.125 13:45:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:00.125 13:45:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:00.385 13:45:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@51 -- # local i 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@41 -- # break 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.385 13:45:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:00.645 13:45:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:00.904 13:45:40 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:00.904 13:45:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.904 13:45:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:00.904 13:45:40 -- bdev/nbd_common.sh@41 -- # break 00:22:00.904 13:45:40 -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.904 13:45:40 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:00.904 13:45:40 -- bdev/bdev_raid.sh@709 -- # killprocess 127971 00:22:00.904 13:45:40 -- common/autotest_common.sh@926 -- # '[' -z 127971 ']' 00:22:00.904 13:45:40 -- common/autotest_common.sh@930 -- # kill -0 127971 00:22:00.904 13:45:40 -- common/autotest_common.sh@931 -- # uname 00:22:00.904 13:45:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.904 13:45:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127971 00:22:00.904 13:45:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:00.904 13:45:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:00.904 13:45:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127971' 00:22:00.904 killing process with pid 127971 00:22:00.904 13:45:40 -- common/autotest_common.sh@945 -- # kill 127971 00:22:00.904 Received shutdown signal, test time was about 60.000000 seconds 00:22:00.904 00:22:00.904 Latency(us) 00:22:00.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.904 =================================================================================================================== 00:22:00.904 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.904 [2024-07-10 13:45:40.051358] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.904 13:45:40 -- common/autotest_common.sh@950 -- # wait 127971 00:22:01.164 [2024-07-10 13:45:40.505174] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:02.540 ************************************ 00:22:02.540 END TEST raid_rebuild_test 00:22:02.540 ************************************ 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:02.540 00:22:02.540 real 0m21.646s 00:22:02.540 user 0m29.702s 00:22:02.540 sys 0m3.581s 00:22:02.540 13:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.540 13:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:22:02.540 13:45:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:02.540 13:45:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:02.540 13:45:41 -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.540 ************************************ 00:22:02.540 START TEST raid_rebuild_test_sb 00:22:02.540 ************************************ 00:22:02.540 13:45:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=128573 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128573 /var/tmp/spdk-raid.sock 00:22:02.540 13:45:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:02.540 13:45:41 -- common/autotest_common.sh@819 -- # '[' -z 128573 ']' 00:22:02.540 13:45:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:02.540 13:45:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.540 13:45:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:02.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
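The trace above (bdev_raid.sh@543-545 plus the waitforlisten lines from autotest_common.sh) captures the harness pattern used for every test in this run: bdevperf is launched on a private RPC socket with -z, so it idles until driven over RPC, its pid is recorded, and the script blocks until the socket answers. A minimal sketch of that pattern, reconstructed from the xtrace output — the command line and socket path are copied verbatim from the trace, while the polling loop and the rpc_get_methods probe are assumptions about the helper's internals, not the script's exact code:

  # Start bdevperf detached; -z makes it wait for RPCs instead of running a workload immediately.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Sketch of waitforlisten: poll the UNIX-domain RPC socket until the app
  # responds, giving up after max_retries attempts (the real helper also
  # checks that the pid is still alive between retries).
  rpc_addr=/var/tmp/spdk-raid.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
      break
    fi
    sleep 0.1
  done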
00:22:02.540 13:45:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.540 13:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:02.799 [2024-07-10 13:45:41.932440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:02.799 [2024-07-10 13:45:41.932589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128573 ] 00:22:02.799 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:02.799 Zero copy mechanism will not be used. 00:22:02.799 [2024-07-10 13:45:42.072434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.059 [2024-07-10 13:45:42.274164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.318 [2024-07-10 13:45:42.474189] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:03.577 13:45:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:03.577 13:45:42 -- common/autotest_common.sh@852 -- # return 0 00:22:03.577 13:45:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:03.577 13:45:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:03.577 13:45:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:03.835 BaseBdev1_malloc 00:22:03.835 13:45:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:03.835 [2024-07-10 13:45:43.192039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:03.835 [2024-07-10 13:45:43.192158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.835 [2024-07-10 13:45:43.192189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:03.835 [2024-07-10 13:45:43.192226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.093 [2024-07-10 13:45:43.194402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.093 [2024-07-10 13:45:43.194469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:04.093 BaseBdev1 00:22:04.093 13:45:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:04.093 13:45:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:04.093 13:45:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:04.093 BaseBdev2_malloc 00:22:04.093 13:45:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:04.351 [2024-07-10 13:45:43.637650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:04.351 [2024-07-10 13:45:43.637736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.351 [2024-07-10 13:45:43.637771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:04.351 [2024-07-10 13:45:43.637811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.351 [2024-07-10 13:45:43.639759] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:22:04.351 [2024-07-10 13:45:43.639804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:04.351 BaseBdev2 00:22:04.351 13:45:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:04.351 13:45:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:04.351 13:45:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:04.610 BaseBdev3_malloc 00:22:04.610 13:45:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:04.870 [2024-07-10 13:45:44.048561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:04.870 [2024-07-10 13:45:44.048649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.870 [2024-07-10 13:45:44.048690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:04.870 [2024-07-10 13:45:44.048727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.870 [2024-07-10 13:45:44.050672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.870 [2024-07-10 13:45:44.050719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:04.870 BaseBdev3 00:22:04.870 13:45:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:04.870 13:45:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:04.870 13:45:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:05.130 BaseBdev4_malloc 00:22:05.130 13:45:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:05.130 [2024-07-10 13:45:44.460516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:05.130 [2024-07-10 13:45:44.460629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.130 [2024-07-10 13:45:44.460680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:05.130 [2024-07-10 13:45:44.460714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.130 [2024-07-10 13:45:44.462646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.130 [2024-07-10 13:45:44.462701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:05.130 BaseBdev4 00:22:05.130 13:45:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:05.389 spare_malloc 00:22:05.389 13:45:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:05.648 spare_delay 00:22:05.649 13:45:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:05.908 [2024-07-10 13:45:45.086575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:05.908 [2024-07-10 13:45:45.086681] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.908 [2024-07-10 13:45:45.086712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:05.908 [2024-07-10 13:45:45.086749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.908 [2024-07-10 13:45:45.088933] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.908 [2024-07-10 13:45:45.088996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:05.908 spare 00:22:05.908 13:45:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:06.166 [2024-07-10 13:45:45.282338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.166 [2024-07-10 13:45:45.284081] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.166 [2024-07-10 13:45:45.284169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:06.166 [2024-07-10 13:45:45.284214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:06.166 [2024-07-10 13:45:45.284438] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:06.166 [2024-07-10 13:45:45.284455] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:06.166 [2024-07-10 13:45:45.284622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:06.166 [2024-07-10 13:45:45.284950] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:06.166 [2024-07-10 13:45:45.284968] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:06.166 [2024-07-10 13:45:45.285124] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.166 "name": "raid_bdev1", 00:22:06.166 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:06.166 "strip_size_kb": 0, 00:22:06.166 "state": "online", 00:22:06.166 "raid_level": "raid1", 00:22:06.166 "superblock": true, 00:22:06.166 "num_base_bdevs": 4, 00:22:06.166 "num_base_bdevs_discovered": 4, 00:22:06.166 "num_base_bdevs_operational": 4, 00:22:06.166 "base_bdevs_list": [ 
00:22:06.166 { 00:22:06.166 "name": "BaseBdev1", 00:22:06.166 "uuid": "38118f79-77c6-53d8-838d-001ba46a2fb2", 00:22:06.166 "is_configured": true, 00:22:06.166 "data_offset": 2048, 00:22:06.166 "data_size": 63488 00:22:06.166 }, 00:22:06.166 { 00:22:06.166 "name": "BaseBdev2", 00:22:06.166 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:06.166 "is_configured": true, 00:22:06.166 "data_offset": 2048, 00:22:06.166 "data_size": 63488 00:22:06.166 }, 00:22:06.166 { 00:22:06.166 "name": "BaseBdev3", 00:22:06.166 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:06.166 "is_configured": true, 00:22:06.166 "data_offset": 2048, 00:22:06.166 "data_size": 63488 00:22:06.166 }, 00:22:06.166 { 00:22:06.166 "name": "BaseBdev4", 00:22:06.166 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:06.166 "is_configured": true, 00:22:06.166 "data_offset": 2048, 00:22:06.166 "data_size": 63488 00:22:06.166 } 00:22:06.166 ] 00:22:06.166 }' 00:22:06.166 13:45:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.166 13:45:45 -- common/autotest_common.sh@10 -- # set +x 00:22:07.105 13:45:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:07.105 13:45:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:07.105 [2024-07-10 13:45:46.296837] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:07.105 13:45:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:07.105 13:45:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:07.105 13:45:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.364 13:45:46 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:07.364 13:45:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:07.364 13:45:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:07.364 13:45:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.364 13:45:46 -- bdev/nbd_common.sh@12 -- # local i 00:22:07.365 13:45:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.365 13:45:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.365 13:45:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:07.624 [2024-07-10 13:45:46.739816] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:07.624 /dev/nbd0 00:22:07.624 13:45:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.624 13:45:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.624 13:45:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:07.624 13:45:46 -- common/autotest_common.sh@857 -- # local i 00:22:07.624 13:45:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:07.624 13:45:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:07.624 13:45:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:07.624 13:45:46 -- common/autotest_common.sh@861 -- # break 00:22:07.624 13:45:46 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:07.624 13:45:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:07.624 13:45:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.624 1+0 records in 00:22:07.624 1+0 records out 00:22:07.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312436 s, 13.1 MB/s 00:22:07.624 13:45:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.624 13:45:46 -- common/autotest_common.sh@874 -- # size=4096 00:22:07.624 13:45:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.624 13:45:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:07.624 13:45:46 -- common/autotest_common.sh@877 -- # return 0 00:22:07.624 13:45:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.624 13:45:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.624 13:45:46 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:07.624 13:45:46 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:07.624 13:45:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:14.236 63488+0 records in 00:22:14.236 63488+0 records out 00:22:14.236 32505856 bytes (33 MB, 31 MiB) copied, 5.60346 s, 5.8 MB/s 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@51 -- # local i 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:14.236 [2024-07-10 13:45:52.636244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@41 -- # break 00:22:14.236 13:45:52 -- bdev/nbd_common.sh@45 -- # return 0 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:14.236 [2024-07-10 13:45:52.843543] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.236 13:45:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.236 13:45:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.236 "name": "raid_bdev1", 00:22:14.236 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:14.236 "strip_size_kb": 0, 00:22:14.236 "state": "online", 00:22:14.236 "raid_level": "raid1", 00:22:14.236 "superblock": true, 00:22:14.236 "num_base_bdevs": 4, 00:22:14.236 "num_base_bdevs_discovered": 3, 00:22:14.236 "num_base_bdevs_operational": 3, 00:22:14.236 "base_bdevs_list": [ 00:22:14.236 { 00:22:14.236 "name": null, 00:22:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.236 "is_configured": false, 00:22:14.236 "data_offset": 2048, 00:22:14.236 "data_size": 63488 00:22:14.236 }, 00:22:14.236 { 00:22:14.236 "name": "BaseBdev2", 00:22:14.236 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:14.236 "is_configured": true, 00:22:14.236 "data_offset": 2048, 00:22:14.236 "data_size": 63488 00:22:14.236 }, 00:22:14.236 { 00:22:14.236 "name": "BaseBdev3", 00:22:14.236 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:14.236 "is_configured": true, 00:22:14.236 "data_offset": 2048, 00:22:14.236 "data_size": 63488 00:22:14.236 }, 00:22:14.236 { 00:22:14.236 "name": "BaseBdev4", 00:22:14.236 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:14.236 "is_configured": true, 00:22:14.236 "data_offset": 2048, 00:22:14.236 "data_size": 63488 00:22:14.236 } 00:22:14.236 ] 00:22:14.236 }' 00:22:14.236 13:45:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.236 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:14.494 13:45:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:14.753 [2024-07-10 13:45:53.941645] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:14.753 [2024-07-10 13:45:53.941709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:14.753 [2024-07-10 13:45:53.956314] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:22:14.753 [2024-07-10 13:45:53.958231] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:14.753 13:45:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.690 13:45:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.949 13:45:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.949 "name": "raid_bdev1", 00:22:15.949 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:15.949 "strip_size_kb": 0, 00:22:15.949 "state": "online", 
00:22:15.949 "raid_level": "raid1", 00:22:15.949 "superblock": true, 00:22:15.949 "num_base_bdevs": 4, 00:22:15.949 "num_base_bdevs_discovered": 4, 00:22:15.949 "num_base_bdevs_operational": 4, 00:22:15.949 "process": { 00:22:15.949 "type": "rebuild", 00:22:15.949 "target": "spare", 00:22:15.949 "progress": { 00:22:15.949 "blocks": 24576, 00:22:15.949 "percent": 38 00:22:15.949 } 00:22:15.949 }, 00:22:15.949 "base_bdevs_list": [ 00:22:15.949 { 00:22:15.949 "name": "spare", 00:22:15.949 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:15.949 "is_configured": true, 00:22:15.949 "data_offset": 2048, 00:22:15.949 "data_size": 63488 00:22:15.949 }, 00:22:15.949 { 00:22:15.950 "name": "BaseBdev2", 00:22:15.950 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:15.950 "is_configured": true, 00:22:15.950 "data_offset": 2048, 00:22:15.950 "data_size": 63488 00:22:15.950 }, 00:22:15.950 { 00:22:15.950 "name": "BaseBdev3", 00:22:15.950 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:15.950 "is_configured": true, 00:22:15.950 "data_offset": 2048, 00:22:15.950 "data_size": 63488 00:22:15.950 }, 00:22:15.950 { 00:22:15.950 "name": "BaseBdev4", 00:22:15.950 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:15.950 "is_configured": true, 00:22:15.950 "data_offset": 2048, 00:22:15.950 "data_size": 63488 00:22:15.950 } 00:22:15.950 ] 00:22:15.950 }' 00:22:15.950 13:45:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.950 13:45:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.950 13:45:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.950 13:45:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.950 13:45:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:16.209 [2024-07-10 13:45:55.494640] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:16.467 [2024-07-10 13:45:55.566502] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:16.467 [2024-07-10 13:45:55.566597] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.467 13:45:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.726 13:45:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.726 "name": "raid_bdev1", 00:22:16.726 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:16.726 "strip_size_kb": 0, 00:22:16.726 "state": "online", 00:22:16.726 "raid_level": "raid1", 00:22:16.726 
"superblock": true, 00:22:16.726 "num_base_bdevs": 4, 00:22:16.726 "num_base_bdevs_discovered": 3, 00:22:16.726 "num_base_bdevs_operational": 3, 00:22:16.726 "base_bdevs_list": [ 00:22:16.726 { 00:22:16.726 "name": null, 00:22:16.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.726 "is_configured": false, 00:22:16.726 "data_offset": 2048, 00:22:16.726 "data_size": 63488 00:22:16.726 }, 00:22:16.726 { 00:22:16.726 "name": "BaseBdev2", 00:22:16.726 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:16.726 "is_configured": true, 00:22:16.726 "data_offset": 2048, 00:22:16.726 "data_size": 63488 00:22:16.726 }, 00:22:16.726 { 00:22:16.726 "name": "BaseBdev3", 00:22:16.726 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:16.726 "is_configured": true, 00:22:16.726 "data_offset": 2048, 00:22:16.726 "data_size": 63488 00:22:16.726 }, 00:22:16.726 { 00:22:16.726 "name": "BaseBdev4", 00:22:16.726 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:16.726 "is_configured": true, 00:22:16.726 "data_offset": 2048, 00:22:16.726 "data_size": 63488 00:22:16.726 } 00:22:16.726 ] 00:22:16.726 }' 00:22:16.726 13:45:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.726 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.294 13:45:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.590 13:45:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.590 "name": "raid_bdev1", 00:22:17.590 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:17.590 "strip_size_kb": 0, 00:22:17.590 "state": "online", 00:22:17.590 "raid_level": "raid1", 00:22:17.590 "superblock": true, 00:22:17.590 "num_base_bdevs": 4, 00:22:17.590 "num_base_bdevs_discovered": 3, 00:22:17.590 "num_base_bdevs_operational": 3, 00:22:17.590 "base_bdevs_list": [ 00:22:17.590 { 00:22:17.590 "name": null, 00:22:17.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.590 "is_configured": false, 00:22:17.590 "data_offset": 2048, 00:22:17.590 "data_size": 63488 00:22:17.590 }, 00:22:17.590 { 00:22:17.590 "name": "BaseBdev2", 00:22:17.590 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:17.590 "is_configured": true, 00:22:17.590 "data_offset": 2048, 00:22:17.590 "data_size": 63488 00:22:17.590 }, 00:22:17.590 { 00:22:17.590 "name": "BaseBdev3", 00:22:17.590 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:17.590 "is_configured": true, 00:22:17.590 "data_offset": 2048, 00:22:17.590 "data_size": 63488 00:22:17.590 }, 00:22:17.590 { 00:22:17.590 "name": "BaseBdev4", 00:22:17.590 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:17.590 "is_configured": true, 00:22:17.590 "data_offset": 2048, 00:22:17.590 "data_size": 63488 00:22:17.590 } 00:22:17.590 ] 00:22:17.590 }' 00:22:17.590 13:45:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.590 13:45:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:17.590 13:45:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.590 13:45:56 
-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:17.590 13:45:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:17.848 [2024-07-10 13:45:57.069702] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:17.848 [2024-07-10 13:45:57.069761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.848 [2024-07-10 13:45:57.085449] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:22:17.848 [2024-07-10 13:45:57.087366] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:17.848 13:45:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.783 13:45:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.043 13:45:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:19.043 "name": "raid_bdev1", 00:22:19.043 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:19.043 "strip_size_kb": 0, 00:22:19.043 "state": "online", 00:22:19.043 "raid_level": "raid1", 00:22:19.043 "superblock": true, 00:22:19.043 "num_base_bdevs": 4, 00:22:19.043 "num_base_bdevs_discovered": 4, 00:22:19.043 "num_base_bdevs_operational": 4, 00:22:19.043 "process": { 00:22:19.043 "type": "rebuild", 00:22:19.043 "target": "spare", 00:22:19.043 "progress": { 00:22:19.043 "blocks": 24576, 00:22:19.043 "percent": 38 00:22:19.043 } 00:22:19.043 }, 00:22:19.043 "base_bdevs_list": [ 00:22:19.043 { 00:22:19.043 "name": "spare", 00:22:19.043 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:19.043 "is_configured": true, 00:22:19.043 "data_offset": 2048, 00:22:19.043 "data_size": 63488 00:22:19.043 }, 00:22:19.043 { 00:22:19.043 "name": "BaseBdev2", 00:22:19.043 "uuid": "ecafc9a4-7459-5358-b145-a38f290f34b4", 00:22:19.043 "is_configured": true, 00:22:19.043 "data_offset": 2048, 00:22:19.043 "data_size": 63488 00:22:19.043 }, 00:22:19.043 { 00:22:19.043 "name": "BaseBdev3", 00:22:19.043 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:19.043 "is_configured": true, 00:22:19.043 "data_offset": 2048, 00:22:19.043 "data_size": 63488 00:22:19.043 }, 00:22:19.043 { 00:22:19.043 "name": "BaseBdev4", 00:22:19.043 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:19.043 "is_configured": true, 00:22:19.043 "data_offset": 2048, 00:22:19.043 "data_size": 63488 00:22:19.043 } 00:22:19.043 ] 00:22:19.043 }' 00:22:19.043 13:45:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.043 13:45:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.043 13:45:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:19.301 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:19.301 13:45:58 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:19.560 [2024-07-10 13:45:58.667187] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:19.560 [2024-07-10 13:45:58.695064] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.560 13:45:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:19.824 "name": "raid_bdev1", 00:22:19.824 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:19.824 "strip_size_kb": 0, 00:22:19.824 "state": "online", 00:22:19.824 "raid_level": "raid1", 00:22:19.824 "superblock": true, 00:22:19.824 "num_base_bdevs": 4, 00:22:19.824 "num_base_bdevs_discovered": 3, 00:22:19.824 "num_base_bdevs_operational": 3, 00:22:19.824 "process": { 00:22:19.824 "type": "rebuild", 00:22:19.824 "target": "spare", 00:22:19.824 "progress": { 00:22:19.824 "blocks": 38912, 00:22:19.824 "percent": 61 00:22:19.824 } 00:22:19.824 }, 00:22:19.824 "base_bdevs_list": [ 00:22:19.824 { 00:22:19.824 "name": "spare", 00:22:19.824 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:19.824 "is_configured": true, 00:22:19.824 "data_offset": 2048, 00:22:19.824 "data_size": 63488 00:22:19.824 }, 00:22:19.824 { 00:22:19.824 "name": null, 00:22:19.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.824 "is_configured": false, 00:22:19.824 "data_offset": 2048, 00:22:19.824 "data_size": 63488 00:22:19.824 }, 00:22:19.824 { 00:22:19.824 "name": "BaseBdev3", 00:22:19.824 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:19.824 "is_configured": true, 00:22:19.824 "data_offset": 2048, 00:22:19.824 "data_size": 63488 00:22:19.824 }, 00:22:19.824 { 00:22:19.824 "name": "BaseBdev4", 00:22:19.824 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:19.824 "is_configured": true, 00:22:19.824 "data_offset": 2048, 00:22:19.824 "data_size": 63488 00:22:19.824 } 00:22:19.824 ] 00:22:19.824 }' 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@657 -- # local timeout=474 00:22:19.824 13:45:59 -- 
bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.824 13:45:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.090 13:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.090 "name": "raid_bdev1", 00:22:20.090 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:20.090 "strip_size_kb": 0, 00:22:20.090 "state": "online", 00:22:20.090 "raid_level": "raid1", 00:22:20.090 "superblock": true, 00:22:20.090 "num_base_bdevs": 4, 00:22:20.090 "num_base_bdevs_discovered": 3, 00:22:20.090 "num_base_bdevs_operational": 3, 00:22:20.090 "process": { 00:22:20.090 "type": "rebuild", 00:22:20.090 "target": "spare", 00:22:20.090 "progress": { 00:22:20.090 "blocks": 45056, 00:22:20.090 "percent": 70 00:22:20.090 } 00:22:20.090 }, 00:22:20.090 "base_bdevs_list": [ 00:22:20.090 { 00:22:20.090 "name": "spare", 00:22:20.090 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:20.090 "is_configured": true, 00:22:20.090 "data_offset": 2048, 00:22:20.090 "data_size": 63488 00:22:20.090 }, 00:22:20.090 { 00:22:20.090 "name": null, 00:22:20.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.090 "is_configured": false, 00:22:20.090 "data_offset": 2048, 00:22:20.090 "data_size": 63488 00:22:20.090 }, 00:22:20.090 { 00:22:20.090 "name": "BaseBdev3", 00:22:20.090 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:20.090 "is_configured": true, 00:22:20.090 "data_offset": 2048, 00:22:20.090 "data_size": 63488 00:22:20.090 }, 00:22:20.090 { 00:22:20.091 "name": "BaseBdev4", 00:22:20.091 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:20.091 "is_configured": true, 00:22:20.091 "data_offset": 2048, 00:22:20.091 "data_size": 63488 00:22:20.091 } 00:22:20.091 ] 00:22:20.091 }' 00:22:20.091 13:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.091 13:45:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.091 13:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.350 13:45:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.350 13:45:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:20.919 [2024-07-10 13:46:00.202661] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:20.919 [2024-07-10 13:46:00.202758] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:20.919 [2024-07-10 13:46:00.202947] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.177 13:46:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.436 13:46:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.436 "name": "raid_bdev1", 00:22:21.436 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:21.436 "strip_size_kb": 0, 00:22:21.436 "state": "online", 00:22:21.436 "raid_level": "raid1", 00:22:21.436 "superblock": true, 00:22:21.436 "num_base_bdevs": 4, 00:22:21.436 "num_base_bdevs_discovered": 3, 00:22:21.436 "num_base_bdevs_operational": 3, 00:22:21.436 "base_bdevs_list": [ 00:22:21.436 { 00:22:21.437 "name": "spare", 00:22:21.437 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": null, 00:22:21.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.437 "is_configured": false, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": "BaseBdev3", 00:22:21.437 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 }, 00:22:21.437 { 00:22:21.437 "name": "BaseBdev4", 00:22:21.437 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:21.437 "is_configured": true, 00:22:21.437 "data_offset": 2048, 00:22:21.437 "data_size": 63488 00:22:21.437 } 00:22:21.437 ] 00:22:21.437 }' 00:22:21.437 13:46:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.437 13:46:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:21.437 13:46:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@660 -- # break 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.697 13:46:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.697 13:46:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:21.697 "name": "raid_bdev1", 00:22:21.697 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:21.697 "strip_size_kb": 0, 00:22:21.697 "state": "online", 00:22:21.697 "raid_level": "raid1", 00:22:21.697 "superblock": true, 00:22:21.697 "num_base_bdevs": 4, 00:22:21.697 "num_base_bdevs_discovered": 3, 00:22:21.697 "num_base_bdevs_operational": 3, 00:22:21.697 "base_bdevs_list": [ 00:22:21.697 { 00:22:21.697 "name": "spare", 00:22:21.697 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:21.697 "is_configured": true, 00:22:21.697 "data_offset": 2048, 00:22:21.697 "data_size": 63488 00:22:21.697 }, 00:22:21.697 { 00:22:21.697 "name": null, 00:22:21.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.697 "is_configured": false, 00:22:21.697 "data_offset": 2048, 00:22:21.697 
"data_size": 63488 00:22:21.697 }, 00:22:21.697 { 00:22:21.697 "name": "BaseBdev3", 00:22:21.697 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:21.697 "is_configured": true, 00:22:21.697 "data_offset": 2048, 00:22:21.697 "data_size": 63488 00:22:21.697 }, 00:22:21.697 { 00:22:21.697 "name": "BaseBdev4", 00:22:21.697 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:21.697 "is_configured": true, 00:22:21.697 "data_offset": 2048, 00:22:21.697 "data_size": 63488 00:22:21.697 } 00:22:21.697 ] 00:22:21.697 }' 00:22:21.697 13:46:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.957 13:46:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.217 13:46:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.217 "name": "raid_bdev1", 00:22:22.217 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:22.217 "strip_size_kb": 0, 00:22:22.217 "state": "online", 00:22:22.217 "raid_level": "raid1", 00:22:22.217 "superblock": true, 00:22:22.217 "num_base_bdevs": 4, 00:22:22.217 "num_base_bdevs_discovered": 3, 00:22:22.217 "num_base_bdevs_operational": 3, 00:22:22.217 "base_bdevs_list": [ 00:22:22.217 { 00:22:22.217 "name": "spare", 00:22:22.217 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:22.217 "is_configured": true, 00:22:22.217 "data_offset": 2048, 00:22:22.217 "data_size": 63488 00:22:22.217 }, 00:22:22.217 { 00:22:22.217 "name": null, 00:22:22.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.217 "is_configured": false, 00:22:22.217 "data_offset": 2048, 00:22:22.217 "data_size": 63488 00:22:22.217 }, 00:22:22.217 { 00:22:22.217 "name": "BaseBdev3", 00:22:22.217 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:22.217 "is_configured": true, 00:22:22.217 "data_offset": 2048, 00:22:22.217 "data_size": 63488 00:22:22.217 }, 00:22:22.217 { 00:22:22.217 "name": "BaseBdev4", 00:22:22.217 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:22.217 "is_configured": true, 00:22:22.217 "data_offset": 2048, 00:22:22.217 "data_size": 63488 00:22:22.217 } 00:22:22.217 ] 00:22:22.217 }' 00:22:22.217 13:46:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.217 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:22.783 13:46:02 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:22:23.042 [2024-07-10 13:46:02.283398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.042 [2024-07-10 13:46:02.283442] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.042 [2024-07-10 13:46:02.283550] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.042 [2024-07-10 13:46:02.283639] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.042 [2024-07-10 13:46:02.283649] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:23.042 13:46:02 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.042 13:46:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:23.301 13:46:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:23.301 13:46:02 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:23.301 13:46:02 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@12 -- # local i 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:23.301 13:46:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:23.559 /dev/nbd0 00:22:23.559 13:46:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:23.559 13:46:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:23.559 13:46:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:23.559 13:46:02 -- common/autotest_common.sh@857 -- # local i 00:22:23.559 13:46:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.559 13:46:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.559 13:46:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:23.559 13:46:02 -- common/autotest_common.sh@861 -- # break 00:22:23.559 13:46:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.559 13:46:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.559 13:46:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.559 1+0 records in 00:22:23.559 1+0 records out 00:22:23.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582636 s, 7.0 MB/s 00:22:23.559 13:46:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.559 13:46:02 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.559 13:46:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.559 13:46:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.559 13:46:02 -- common/autotest_common.sh@877 -- # return 0 00:22:23.559 13:46:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.559 13:46:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:23.559 13:46:02 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:23.818 /dev/nbd1 00:22:23.818 13:46:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:23.818 13:46:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:23.819 13:46:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:23.819 13:46:03 -- common/autotest_common.sh@857 -- # local i 00:22:23.819 13:46:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.819 13:46:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.819 13:46:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:23.819 13:46:03 -- common/autotest_common.sh@861 -- # break 00:22:23.819 13:46:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.819 13:46:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.819 13:46:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.819 1+0 records in 00:22:23.819 1+0 records out 00:22:23.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519086 s, 7.9 MB/s 00:22:23.819 13:46:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.819 13:46:03 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.819 13:46:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.819 13:46:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.819 13:46:03 -- common/autotest_common.sh@877 -- # return 0 00:22:23.819 13:46:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.819 13:46:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:23.819 13:46:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:24.078 13:46:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@51 -- # local i 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.078 13:46:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@41 -- # break 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.337 13:46:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd1 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.596 13:46:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:24.597 13:46:03 -- bdev/nbd_common.sh@41 -- # break 00:22:24.597 13:46:03 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.597 13:46:03 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:24.597 13:46:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:24.597 13:46:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:24.597 13:46:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:24.855 13:46:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:25.115 [2024-07-10 13:46:04.397471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:25.115 [2024-07-10 13:46:04.397575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.115 [2024-07-10 13:46:04.397613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:25.115 [2024-07-10 13:46:04.397631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.115 [2024-07-10 13:46:04.399893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.115 [2024-07-10 13:46:04.399962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:25.115 [2024-07-10 13:46:04.400111] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:25.115 [2024-07-10 13:46:04.400187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.115 BaseBdev1 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@696 -- # continue 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:25.115 13:46:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:25.374 13:46:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:25.633 [2024-07-10 13:46:04.830734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:25.633 [2024-07-10 13:46:04.830850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.633 [2024-07-10 13:46:04.830894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:25.633 [2024-07-10 13:46:04.830912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.633 
[2024-07-10 13:46:04.831391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.634 [2024-07-10 13:46:04.831449] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:25.634 [2024-07-10 13:46:04.831600] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:25.634 [2024-07-10 13:46:04.831616] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:25.634 [2024-07-10 13:46:04.831623] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:25.634 [2024-07-10 13:46:04.831644] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:25.634 [2024-07-10 13:46:04.831729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.634 BaseBdev3 00:22:25.634 13:46:04 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.634 13:46:04 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:25.634 13:46:04 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:25.899 13:46:05 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:26.167 [2024-07-10 13:46:05.279026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:26.167 [2024-07-10 13:46:05.279135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.167 [2024-07-10 13:46:05.279181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:26.167 [2024-07-10 13:46:05.279210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.167 [2024-07-10 13:46:05.279760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.167 [2024-07-10 13:46:05.279824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:26.167 [2024-07-10 13:46:05.279946] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:26.167 [2024-07-10 13:46:05.279997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:26.167 BaseBdev4 00:22:26.167 13:46:05 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:26.167 13:46:05 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:26.427 [2024-07-10 13:46:05.722280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:26.427 [2024-07-10 13:46:05.722378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.427 [2024-07-10 13:46:05.722411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:26.427 [2024-07-10 13:46:05.722437] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.427 [2024-07-10 13:46:05.722939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.427 [2024-07-10 13:46:05.722994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:26.427 
[2024-07-10 13:46:05.723128] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:26.427 [2024-07-10 13:46:05.723174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.427 spare 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.427 13:46:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.687 [2024-07-10 13:46:05.823105] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:26.687 [2024-07-10 13:46:05.823151] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:26.687 [2024-07-10 13:46:05.823330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:22:26.687 [2024-07-10 13:46:05.823763] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:26.687 [2024-07-10 13:46:05.823787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:26.687 [2024-07-10 13:46:05.823960] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.687 13:46:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.687 "name": "raid_bdev1", 00:22:26.687 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:26.687 "strip_size_kb": 0, 00:22:26.687 "state": "online", 00:22:26.687 "raid_level": "raid1", 00:22:26.687 "superblock": true, 00:22:26.687 "num_base_bdevs": 4, 00:22:26.687 "num_base_bdevs_discovered": 3, 00:22:26.687 "num_base_bdevs_operational": 3, 00:22:26.687 "base_bdevs_list": [ 00:22:26.687 { 00:22:26.687 "name": "spare", 00:22:26.687 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:26.687 "is_configured": true, 00:22:26.687 "data_offset": 2048, 00:22:26.687 "data_size": 63488 00:22:26.687 }, 00:22:26.687 { 00:22:26.687 "name": null, 00:22:26.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.687 "is_configured": false, 00:22:26.687 "data_offset": 2048, 00:22:26.687 "data_size": 63488 00:22:26.687 }, 00:22:26.687 { 00:22:26.687 "name": "BaseBdev3", 00:22:26.687 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:26.687 "is_configured": true, 00:22:26.687 "data_offset": 2048, 00:22:26.687 "data_size": 63488 00:22:26.687 }, 00:22:26.687 { 00:22:26.687 "name": "BaseBdev4", 00:22:26.687 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:26.687 "is_configured": true, 00:22:26.687 "data_offset": 2048, 00:22:26.687 "data_size": 63488 00:22:26.687 } 00:22:26.687 ] 00:22:26.687 }' 00:22:26.687 13:46:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.687 13:46:05 -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.625 "name": "raid_bdev1", 00:22:27.625 "uuid": "8bee7aa7-b253-422d-b3dd-96ab8afa54bd", 00:22:27.625 "strip_size_kb": 0, 00:22:27.625 "state": "online", 00:22:27.625 "raid_level": "raid1", 00:22:27.625 "superblock": true, 00:22:27.625 "num_base_bdevs": 4, 00:22:27.625 "num_base_bdevs_discovered": 3, 00:22:27.625 "num_base_bdevs_operational": 3, 00:22:27.625 "base_bdevs_list": [ 00:22:27.625 { 00:22:27.625 "name": "spare", 00:22:27.625 "uuid": "907a3747-1824-5e41-aa88-bda454e3f33e", 00:22:27.625 "is_configured": true, 00:22:27.625 "data_offset": 2048, 00:22:27.625 "data_size": 63488 00:22:27.625 }, 00:22:27.625 { 00:22:27.625 "name": null, 00:22:27.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.625 "is_configured": false, 00:22:27.625 "data_offset": 2048, 00:22:27.625 "data_size": 63488 00:22:27.625 }, 00:22:27.625 { 00:22:27.625 "name": "BaseBdev3", 00:22:27.625 "uuid": "a96f6e60-39cb-5c04-a668-15d6f59237fb", 00:22:27.625 "is_configured": true, 00:22:27.625 "data_offset": 2048, 00:22:27.625 "data_size": 63488 00:22:27.625 }, 00:22:27.625 { 00:22:27.625 "name": "BaseBdev4", 00:22:27.625 "uuid": "be12aaad-84c0-5eee-ac08-2284d6ff7536", 00:22:27.625 "is_configured": true, 00:22:27.625 "data_offset": 2048, 00:22:27.625 "data_size": 63488 00:22:27.625 } 00:22:27.625 ] 00:22:27.625 }' 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.625 13:46:06 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:27.884 13:46:07 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.884 13:46:07 -- bdev/bdev_raid.sh@709 -- # killprocess 128573 00:22:27.884 13:46:07 -- common/autotest_common.sh@926 -- # '[' -z 128573 ']' 00:22:27.884 13:46:07 -- common/autotest_common.sh@930 -- # kill -0 128573 00:22:27.884 13:46:07 -- common/autotest_common.sh@931 -- # uname 00:22:27.884 13:46:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.884 13:46:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128573 00:22:27.884 13:46:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:27.884 13:46:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:27.884 killing process with pid 128573 00:22:27.884 13:46:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128573' 00:22:27.884 13:46:07 -- common/autotest_common.sh@945 -- # kill 
128573 00:22:27.884 Received shutdown signal, test time was about 60.000000 seconds 00:22:27.884 00:22:27.884 Latency(us) 00:22:27.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.884 =================================================================================================================== 00:22:27.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.884 13:46:07 -- common/autotest_common.sh@950 -- # wait 128573 00:22:27.884 [2024-07-10 13:46:07.202522] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.884 [2024-07-10 13:46:07.202618] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.884 [2024-07-10 13:46:07.202711] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.884 [2024-07-10 13:46:07.202725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:28.454 [2024-07-10 13:46:07.749889] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:29.853 13:46:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:29.853 00:22:29.853 real 0m27.286s 00:22:29.853 user 0m39.815s 00:22:29.853 sys 0m4.193s 00:22:29.853 13:46:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.853 13:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:29.853 ************************************ 00:22:29.853 END TEST raid_rebuild_test_sb 00:22:29.853 ************************************ 00:22:29.853 13:46:09 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:29.853 13:46:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:29.853 13:46:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:29.853 13:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.113 ************************************ 00:22:30.113 START TEST raid_rebuild_test_io 00:22:30.113 ************************************ 00:22:30.113 13:46:09 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:30.113 
13:46:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=129262 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129262 /var/tmp/spdk-raid.sock 00:22:30.113 13:46:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:30.113 13:46:09 -- common/autotest_common.sh@819 -- # '[' -z 129262 ']' 00:22:30.113 13:46:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:30.113 13:46:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.113 13:46:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:30.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:30.113 13:46:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.113 13:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.113 [2024-07-10 13:46:09.292222] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:30.113 [2024-07-10 13:46:09.292360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129262 ] 00:22:30.113 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:30.113 Zero copy mechanism will not be used. 
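
The two bdevperf notices just above follow directly from the command traced at bdev_raid.sh@543: -o 3M requests 3145728-byte I/Os, which exceeds the 65536-byte zero-copy threshold, so bdevperf falls back to copied buffers. For anyone replaying this step by hand, a minimal standalone launch along the lines of the traced one — paths as in this CI run; a sketch, not the authoritative harness:

    # Start bdevperf as the raid test's RPC server. -z defers the actual run:
    # the "Running I/O for 60 seconds" line only appears after the harness
    # issues bdevperf.py ... perform_tests, visible further down in this log.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Stand-in for the waitforlisten call traced at bdev_raid.sh@545:
    # poll until the app has created its UNIX-domain RPC socket.
    while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done
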
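The verify_raid_bdev_process checks that recur throughout this log (bdev_raid.sh@183-@191 in the test above, and again below once this test's rebuild starts) can be hard to follow in xtrace form. A minimal bash sketch of that helper, reconstructed purely from the traced lines shown here — the authoritative version lives in the SPDK repo's bdev_raid.sh and may differ in detail:

    verify_raid_bdev_process() {
        local raid_bdev_name=$1
        local process_type=$2
        local target=$3
        local raid_bdev_info

        # Pull the raid bdev's JSON description over the dedicated RPC socket.
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # The "process" object only exists while a rebuild is in flight,
        # so both fields fall back to "none" once it completes.
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }

That fallback is what ended the wait loop traced earlier: once the rebuild finished, jq yielded none, the [[ none == rebuild ]] test failed, and the break at bdev_raid.sh@660 exited the one-second polling loop before the @666 re-check with none none.
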
00:22:30.113 [2024-07-10 13:46:09.450859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.372 [2024-07-10 13:46:09.647403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.631 [2024-07-10 13:46:09.854933] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.890 13:46:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.890 13:46:10 -- common/autotest_common.sh@852 -- # return 0 00:22:30.890 13:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:30.890 13:46:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:30.890 13:46:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:31.148 BaseBdev1 00:22:31.148 13:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:31.148 13:46:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:31.148 13:46:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:31.406 BaseBdev2 00:22:31.406 13:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:31.406 13:46:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:31.406 13:46:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:31.666 BaseBdev3 00:22:31.666 13:46:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:31.666 13:46:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:31.666 13:46:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:31.925 BaseBdev4 00:22:31.925 13:46:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:32.184 spare_malloc 00:22:32.184 13:46:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:32.184 spare_delay 00:22:32.184 13:46:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:32.443 [2024-07-10 13:46:11.701120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:32.443 [2024-07-10 13:46:11.701213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.443 [2024-07-10 13:46:11.701246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:32.443 [2024-07-10 13:46:11.701297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.443 [2024-07-10 13:46:11.703386] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.443 [2024-07-10 13:46:11.703431] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:32.443 spare 00:22:32.443 13:46:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:32.703 [2024-07-10 13:46:11.881020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.703 [2024-07-10 13:46:11.883938] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.703 [2024-07-10 13:46:11.883989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:32.703 [2024-07-10 13:46:11.884020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:32.703 [2024-07-10 13:46:11.884110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:32.703 [2024-07-10 13:46:11.884124] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:32.703 [2024-07-10 13:46:11.884293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:32.703 [2024-07-10 13:46:11.884640] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:32.703 [2024-07-10 13:46:11.884659] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:32.703 [2024-07-10 13:46:11.884905] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.703 13:46:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.962 13:46:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.962 "name": "raid_bdev1", 00:22:32.962 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:32.962 "strip_size_kb": 0, 00:22:32.962 "state": "online", 00:22:32.962 "raid_level": "raid1", 00:22:32.962 "superblock": false, 00:22:32.962 "num_base_bdevs": 4, 00:22:32.962 "num_base_bdevs_discovered": 4, 00:22:32.962 "num_base_bdevs_operational": 4, 00:22:32.962 "base_bdevs_list": [ 00:22:32.962 { 00:22:32.962 "name": "BaseBdev1", 00:22:32.962 "uuid": "47c17681-5b41-4a1d-951f-0c8d787f162a", 00:22:32.962 "is_configured": true, 00:22:32.962 "data_offset": 0, 00:22:32.962 "data_size": 65536 00:22:32.962 }, 00:22:32.962 { 00:22:32.962 "name": "BaseBdev2", 00:22:32.962 "uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:32.962 "is_configured": true, 00:22:32.962 "data_offset": 0, 00:22:32.962 "data_size": 65536 00:22:32.962 }, 00:22:32.962 { 00:22:32.962 "name": "BaseBdev3", 00:22:32.962 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:32.962 "is_configured": true, 00:22:32.962 "data_offset": 0, 00:22:32.962 "data_size": 65536 00:22:32.962 }, 00:22:32.962 { 00:22:32.962 "name": "BaseBdev4", 00:22:32.962 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:32.962 "is_configured": true, 00:22:32.962 "data_offset": 0, 00:22:32.962 "data_size": 65536 00:22:32.962 } 00:22:32.962 ] 00:22:32.962 }' 00:22:32.962 
13:46:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.962 13:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:33.594 13:46:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.594 13:46:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:33.594 [2024-07-10 13:46:12.847491] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.594 13:46:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:33.594 13:46:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.594 13:46:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:33.850 13:46:13 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:33.850 13:46:13 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:33.850 13:46:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:33.850 13:46:13 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:33.850 [2024-07-10 13:46:13.141790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:33.850 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:33.850 Zero copy mechanism will not be used. 00:22:33.850 Running I/O for 60 seconds... 00:22:34.108 [2024-07-10 13:46:13.242000] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:34.108 [2024-07-10 13:46:13.247692] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.108 13:46:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.367 13:46:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.367 "name": "raid_bdev1", 00:22:34.367 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:34.367 "strip_size_kb": 0, 00:22:34.367 "state": "online", 00:22:34.367 "raid_level": "raid1", 00:22:34.367 "superblock": false, 00:22:34.367 "num_base_bdevs": 4, 00:22:34.367 "num_base_bdevs_discovered": 3, 00:22:34.367 "num_base_bdevs_operational": 3, 00:22:34.367 "base_bdevs_list": [ 00:22:34.367 { 00:22:34.367 "name": null, 00:22:34.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.367 "is_configured": false, 00:22:34.367 "data_offset": 0, 00:22:34.367 "data_size": 65536 00:22:34.367 }, 00:22:34.367 { 00:22:34.367 "name": "BaseBdev2", 00:22:34.367 
"uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:34.367 "is_configured": true, 00:22:34.367 "data_offset": 0, 00:22:34.367 "data_size": 65536 00:22:34.367 }, 00:22:34.367 { 00:22:34.367 "name": "BaseBdev3", 00:22:34.367 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:34.367 "is_configured": true, 00:22:34.367 "data_offset": 0, 00:22:34.367 "data_size": 65536 00:22:34.367 }, 00:22:34.367 { 00:22:34.367 "name": "BaseBdev4", 00:22:34.367 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:34.367 "is_configured": true, 00:22:34.367 "data_offset": 0, 00:22:34.367 "data_size": 65536 00:22:34.367 } 00:22:34.367 ] 00:22:34.367 }' 00:22:34.367 13:46:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.367 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:22:34.934 13:46:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:35.193 [2024-07-10 13:46:14.329549] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:35.193 [2024-07-10 13:46:14.329643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:35.193 13:46:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:35.193 [2024-07-10 13:46:14.400989] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:35.193 [2024-07-10 13:46:14.403033] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:35.193 [2024-07-10 13:46:14.521550] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:35.193 [2024-07-10 13:46:14.522151] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:35.451 [2024-07-10 13:46:14.740835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:35.451 [2024-07-10 13:46:14.741606] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:38.570 [2024-07-10 13:46:15.104232] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.570 [2024-07-10 13:46:15.446901] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:38.570 [2024-07-10 13:46:15.448452] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.570 "name": "raid_bdev1", 00:22:38.570 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:38.570 "strip_size_kb": 0, 00:22:38.570 "state": "online", 00:22:38.570 "raid_level": "raid1", 
00:22:38.570 "superblock": false, 00:22:38.570 "num_base_bdevs": 4, 00:22:38.570 "num_base_bdevs_discovered": 4, 00:22:38.570 "num_base_bdevs_operational": 4, 00:22:38.570 "process": { 00:22:38.570 "type": "rebuild", 00:22:38.570 "target": "spare", 00:22:38.570 "progress": { 00:22:38.570 "blocks": 14336, 00:22:38.570 "percent": 21 00:22:38.570 } 00:22:38.570 }, 00:22:38.570 "base_bdevs_list": [ 00:22:38.570 { 00:22:38.570 "name": "spare", 00:22:38.570 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:38.570 "is_configured": true, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 }, 00:22:38.570 { 00:22:38.570 "name": "BaseBdev2", 00:22:38.570 "uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:38.570 "is_configured": true, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 }, 00:22:38.570 { 00:22:38.570 "name": "BaseBdev3", 00:22:38.570 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:38.570 "is_configured": true, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 }, 00:22:38.570 { 00:22:38.570 "name": "BaseBdev4", 00:22:38.570 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:38.570 "is_configured": true, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 } 00:22:38.570 ] 00:22:38.570 }' 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.570 13:46:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:38.570 [2024-07-10 13:46:15.701263] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:38.570 [2024-07-10 13:46:15.858584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:38.570 [2024-07-10 13:46:16.022493] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:38.570 [2024-07-10 13:46:16.032419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.570 [2024-07-10 13:46:16.056357] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.570 13:46:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.570 13:46:16 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.570 "name": "raid_bdev1", 00:22:38.570 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:38.570 "strip_size_kb": 0, 00:22:38.570 "state": "online", 00:22:38.570 "raid_level": "raid1", 00:22:38.570 "superblock": false, 00:22:38.570 "num_base_bdevs": 4, 00:22:38.570 "num_base_bdevs_discovered": 3, 00:22:38.570 "num_base_bdevs_operational": 3, 00:22:38.570 "base_bdevs_list": [ 00:22:38.570 { 00:22:38.570 "name": null, 00:22:38.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.570 "is_configured": false, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 }, 00:22:38.570 { 00:22:38.570 "name": "BaseBdev2", 00:22:38.570 "uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:38.570 "is_configured": true, 00:22:38.570 "data_offset": 0, 00:22:38.570 "data_size": 65536 00:22:38.570 }, 00:22:38.570 { 00:22:38.570 "name": "BaseBdev3", 00:22:38.570 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:38.571 "is_configured": true, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 }, 00:22:38.571 { 00:22:38.571 "name": "BaseBdev4", 00:22:38.571 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:38.571 "is_configured": true, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 } 00:22:38.571 ] 00:22:38.571 }' 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.571 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.571 13:46:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.571 "name": "raid_bdev1", 00:22:38.571 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:38.571 "strip_size_kb": 0, 00:22:38.571 "state": "online", 00:22:38.571 "raid_level": "raid1", 00:22:38.571 "superblock": false, 00:22:38.571 "num_base_bdevs": 4, 00:22:38.571 "num_base_bdevs_discovered": 3, 00:22:38.571 "num_base_bdevs_operational": 3, 00:22:38.571 "base_bdevs_list": [ 00:22:38.571 { 00:22:38.571 "name": null, 00:22:38.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.571 "is_configured": false, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 }, 00:22:38.571 { 00:22:38.571 "name": "BaseBdev2", 00:22:38.571 "uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:38.571 "is_configured": true, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 }, 00:22:38.571 { 00:22:38.571 "name": "BaseBdev3", 00:22:38.571 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:38.571 "is_configured": true, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 }, 00:22:38.571 { 00:22:38.571 "name": "BaseBdev4", 00:22:38.571 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:38.571 "is_configured": true, 00:22:38.571 "data_offset": 0, 00:22:38.571 "data_size": 65536 00:22:38.571 } 00:22:38.571 ] 00:22:38.571 }' 00:22:38.571 13:46:17 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:38.571 [2024-07-10 13:46:17.491174] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:38.571 [2024-07-10 13:46:17.491243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.571 [2024-07-10 13:46:17.547812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:38.571 13:46:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:38.571 [2024-07-10 13:46:17.549706] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:38.571 [2024-07-10 13:46:17.701461] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:38.852 [2024-07-10 13:46:17.940820] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:38.852 [2024-07-10 13:46:17.941168] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:39.111 [2024-07-10 13:46:18.212940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:39.111 [2024-07-10 13:46:18.213576] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:39.111 [2024-07-10 13:46:18.434786] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:39.111 [2024-07-10 13:46:18.435632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.371 13:46:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.631 "name": "raid_bdev1", 00:22:39.631 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:39.631 "strip_size_kb": 0, 00:22:39.631 "state": "online", 00:22:39.631 "raid_level": "raid1", 00:22:39.631 "superblock": false, 00:22:39.631 "num_base_bdevs": 4, 00:22:39.631 "num_base_bdevs_discovered": 4, 00:22:39.631 "num_base_bdevs_operational": 4, 00:22:39.631 "process": { 00:22:39.631 "type": "rebuild", 00:22:39.631 "target": "spare", 00:22:39.631 "progress": { 00:22:39.631 "blocks": 12288, 00:22:39.631 "percent": 18 00:22:39.631 } 00:22:39.631 }, 00:22:39.631 "base_bdevs_list": [ 00:22:39.631 { 00:22:39.631 "name": "spare", 00:22:39.631 "uuid": 
"d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:39.631 "is_configured": true, 00:22:39.631 "data_offset": 0, 00:22:39.631 "data_size": 65536 00:22:39.631 }, 00:22:39.631 { 00:22:39.631 "name": "BaseBdev2", 00:22:39.631 "uuid": "70ed3ed1-5a7a-42d1-a3e8-157f05249883", 00:22:39.631 "is_configured": true, 00:22:39.631 "data_offset": 0, 00:22:39.631 "data_size": 65536 00:22:39.631 }, 00:22:39.631 { 00:22:39.631 "name": "BaseBdev3", 00:22:39.631 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:39.631 "is_configured": true, 00:22:39.631 "data_offset": 0, 00:22:39.631 "data_size": 65536 00:22:39.631 }, 00:22:39.631 { 00:22:39.631 "name": "BaseBdev4", 00:22:39.631 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:39.631 "is_configured": true, 00:22:39.631 "data_offset": 0, 00:22:39.631 "data_size": 65536 00:22:39.631 } 00:22:39.631 ] 00:22:39.631 }' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:39.631 13:46:18 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:39.889 [2024-07-10 13:46:19.104928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:39.889 [2024-07-10 13:46:19.177914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:39.889 [2024-07-10 13:46:19.186572] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:22:39.889 [2024-07-10 13:46:19.186624] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.889 13:46:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.148 13:46:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.148 "name": "raid_bdev1", 00:22:40.148 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:40.148 "strip_size_kb": 0, 00:22:40.148 "state": "online", 00:22:40.148 "raid_level": "raid1", 00:22:40.148 "superblock": false, 00:22:40.148 "num_base_bdevs": 4, 00:22:40.148 "num_base_bdevs_discovered": 3, 00:22:40.148 "num_base_bdevs_operational": 3, 00:22:40.148 "process": { 00:22:40.148 "type": "rebuild", 00:22:40.148 "target": "spare", 00:22:40.148 "progress": { 
00:22:40.148 "blocks": 24576, 00:22:40.148 "percent": 37 00:22:40.148 } 00:22:40.148 }, 00:22:40.148 "base_bdevs_list": [ 00:22:40.148 { 00:22:40.148 "name": "spare", 00:22:40.148 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 0, 00:22:40.148 "data_size": 65536 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": null, 00:22:40.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.148 "is_configured": false, 00:22:40.148 "data_offset": 0, 00:22:40.148 "data_size": 65536 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": "BaseBdev3", 00:22:40.148 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 0, 00:22:40.148 "data_size": 65536 00:22:40.148 }, 00:22:40.148 { 00:22:40.148 "name": "BaseBdev4", 00:22:40.148 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:40.148 "is_configured": true, 00:22:40.148 "data_offset": 0, 00:22:40.148 "data_size": 65536 00:22:40.148 } 00:22:40.148 ] 00:22:40.148 }' 00:22:40.149 13:46:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.407 [2024-07-10 13:46:19.519389] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@657 -- # local timeout=494 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.407 13:46:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.668 "name": "raid_bdev1", 00:22:40.668 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:40.668 "strip_size_kb": 0, 00:22:40.668 "state": "online", 00:22:40.668 "raid_level": "raid1", 00:22:40.668 "superblock": false, 00:22:40.668 "num_base_bdevs": 4, 00:22:40.668 "num_base_bdevs_discovered": 3, 00:22:40.668 "num_base_bdevs_operational": 3, 00:22:40.668 "process": { 00:22:40.668 "type": "rebuild", 00:22:40.668 "target": "spare", 00:22:40.668 "progress": { 00:22:40.668 "blocks": 28672, 00:22:40.668 "percent": 43 00:22:40.668 } 00:22:40.668 }, 00:22:40.668 "base_bdevs_list": [ 00:22:40.668 { 00:22:40.668 "name": "spare", 00:22:40.668 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:40.668 "is_configured": true, 00:22:40.668 "data_offset": 0, 00:22:40.668 "data_size": 65536 00:22:40.668 }, 00:22:40.668 { 00:22:40.668 "name": null, 00:22:40.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.668 "is_configured": false, 00:22:40.668 "data_offset": 0, 00:22:40.668 "data_size": 65536 00:22:40.668 }, 00:22:40.668 { 00:22:40.668 "name": "BaseBdev3", 00:22:40.668 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 
00:22:40.668 "is_configured": true, 00:22:40.668 "data_offset": 0, 00:22:40.668 "data_size": 65536 00:22:40.668 }, 00:22:40.668 { 00:22:40.668 "name": "BaseBdev4", 00:22:40.668 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:40.668 "is_configured": true, 00:22:40.668 "data_offset": 0, 00:22:40.668 "data_size": 65536 00:22:40.668 } 00:22:40.668 ] 00:22:40.668 }' 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.668 13:46:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:40.668 [2024-07-10 13:46:20.010068] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:40.668 [2024-07-10 13:46:20.018700] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:40.928 [2024-07-10 13:46:20.224250] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:41.498 [2024-07-10 13:46:20.602759] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:41.498 [2024-07-10 13:46:20.833646] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.757 13:46:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:42.016 "name": "raid_bdev1", 00:22:42.016 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:42.016 "strip_size_kb": 0, 00:22:42.016 "state": "online", 00:22:42.016 "raid_level": "raid1", 00:22:42.016 "superblock": false, 00:22:42.016 "num_base_bdevs": 4, 00:22:42.016 "num_base_bdevs_discovered": 3, 00:22:42.016 "num_base_bdevs_operational": 3, 00:22:42.016 "process": { 00:22:42.016 "type": "rebuild", 00:22:42.016 "target": "spare", 00:22:42.016 "progress": { 00:22:42.016 "blocks": 49152, 00:22:42.016 "percent": 75 00:22:42.016 } 00:22:42.016 }, 00:22:42.016 "base_bdevs_list": [ 00:22:42.016 { 00:22:42.016 "name": "spare", 00:22:42.016 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:42.016 "is_configured": true, 00:22:42.016 "data_offset": 0, 00:22:42.016 "data_size": 65536 00:22:42.016 }, 00:22:42.016 { 00:22:42.016 "name": null, 00:22:42.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.016 "is_configured": false, 00:22:42.016 "data_offset": 0, 00:22:42.016 "data_size": 65536 00:22:42.016 }, 00:22:42.016 { 00:22:42.016 "name": "BaseBdev3", 00:22:42.016 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:42.016 "is_configured": 
true, 00:22:42.016 "data_offset": 0, 00:22:42.016 "data_size": 65536 00:22:42.016 }, 00:22:42.016 { 00:22:42.016 "name": "BaseBdev4", 00:22:42.016 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:42.016 "is_configured": true, 00:22:42.016 "data_offset": 0, 00:22:42.016 "data_size": 65536 00:22:42.016 } 00:22:42.016 ] 00:22:42.016 }' 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:42.016 [2024-07-10 13:46:21.182236] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.016 13:46:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:42.958 [2024-07-10 13:46:21.971555] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:42.958 [2024-07-10 13:46:22.055406] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:42.958 [2024-07-10 13:46:22.058707] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.958 13:46:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.218 13:46:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.218 "name": "raid_bdev1", 00:22:43.218 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:43.218 "strip_size_kb": 0, 00:22:43.218 "state": "online", 00:22:43.218 "raid_level": "raid1", 00:22:43.218 "superblock": false, 00:22:43.218 "num_base_bdevs": 4, 00:22:43.218 "num_base_bdevs_discovered": 3, 00:22:43.218 "num_base_bdevs_operational": 3, 00:22:43.218 "base_bdevs_list": [ 00:22:43.218 { 00:22:43.218 "name": "spare", 00:22:43.218 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:43.218 "is_configured": true, 00:22:43.218 "data_offset": 0, 00:22:43.218 "data_size": 65536 00:22:43.218 }, 00:22:43.218 { 00:22:43.218 "name": null, 00:22:43.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.218 "is_configured": false, 00:22:43.218 "data_offset": 0, 00:22:43.218 "data_size": 65536 00:22:43.218 }, 00:22:43.218 { 00:22:43.218 "name": "BaseBdev3", 00:22:43.218 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:43.218 "is_configured": true, 00:22:43.218 "data_offset": 0, 00:22:43.218 "data_size": 65536 00:22:43.218 }, 00:22:43.218 { 00:22:43.218 "name": "BaseBdev4", 00:22:43.218 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:43.218 "is_configured": true, 00:22:43.218 "data_offset": 0, 00:22:43.218 "data_size": 65536 00:22:43.218 } 00:22:43.218 ] 00:22:43.218 }' 00:22:43.218 13:46:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.218 13:46:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 
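
Note: the verify_raid_bdev_process checks traced throughout this test (bdev_raid.sh lines 183-191 in the trace) reduce to the following helper. This is a sketch reconstructed from the xtrace output, not the literal function body; $rpc_py is assumed here as shorthand for scripts/rpc.py -s /var/tmp/spdk-raid.sock.

    verify_raid_bdev_process() {
        local raid_bdev_name=$1
        local process_type=$2
        local target=$3
        local raid_bdev_info

        # pull the JSON blob for just the raid bdev under test
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # ".process" is absent once no rebuild is running, hence the // "none" fallback
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }
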
00:22:43.218 13:46:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@660 -- # break 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:43.477 "name": "raid_bdev1", 00:22:43.477 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b", 00:22:43.477 "strip_size_kb": 0, 00:22:43.477 "state": "online", 00:22:43.477 "raid_level": "raid1", 00:22:43.477 "superblock": false, 00:22:43.477 "num_base_bdevs": 4, 00:22:43.477 "num_base_bdevs_discovered": 3, 00:22:43.477 "num_base_bdevs_operational": 3, 00:22:43.477 "base_bdevs_list": [ 00:22:43.477 { 00:22:43.477 "name": "spare", 00:22:43.477 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167", 00:22:43.477 "is_configured": true, 00:22:43.477 "data_offset": 0, 00:22:43.477 "data_size": 65536 00:22:43.477 }, 00:22:43.477 { 00:22:43.477 "name": null, 00:22:43.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.477 "is_configured": false, 00:22:43.477 "data_offset": 0, 00:22:43.477 "data_size": 65536 00:22:43.477 }, 00:22:43.477 { 00:22:43.477 "name": "BaseBdev3", 00:22:43.477 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300", 00:22:43.477 "is_configured": true, 00:22:43.477 "data_offset": 0, 00:22:43.477 "data_size": 65536 00:22:43.477 }, 00:22:43.477 { 00:22:43.477 "name": "BaseBdev4", 00:22:43.477 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6", 00:22:43.477 "is_configured": true, 00:22:43.477 "data_offset": 0, 00:22:43.477 "data_size": 65536 00:22:43.477 } 00:22:43.477 ] 00:22:43.477 }' 00:22:43.477 13:46:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:43.746 13:46:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:43.746 13:46:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:43.746 13:46:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.747 13:46:22 -- bdev/bdev_raid.sh@127 -- 
# jq -r '.[] | select(.name == "raid_bdev1")'
00:22:44.007 13:46:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:44.007 "name": "raid_bdev1",
00:22:44.007 "uuid": "2b9d7a5f-21da-4399-9e63-1496653def6b",
00:22:44.007 "strip_size_kb": 0,
00:22:44.007 "state": "online",
00:22:44.007 "raid_level": "raid1",
00:22:44.007 "superblock": false,
00:22:44.007 "num_base_bdevs": 4,
00:22:44.007 "num_base_bdevs_discovered": 3,
00:22:44.007 "num_base_bdevs_operational": 3,
00:22:44.007 "base_bdevs_list": [
00:22:44.007 {
00:22:44.007 "name": "spare",
00:22:44.007 "uuid": "d02241f8-c7fb-5c15-90f4-bb68d085c167",
00:22:44.007 "is_configured": true,
00:22:44.007 "data_offset": 0,
00:22:44.007 "data_size": 65536
00:22:44.007 },
00:22:44.007 {
00:22:44.007 "name": null,
00:22:44.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:44.007 "is_configured": false,
00:22:44.007 "data_offset": 0,
00:22:44.007 "data_size": 65536
00:22:44.007 },
00:22:44.007 {
00:22:44.007 "name": "BaseBdev3",
00:22:44.007 "uuid": "d0c2f0d8-4ca7-4e91-bcf4-3d80aa86b300",
00:22:44.007 "is_configured": true,
00:22:44.007 "data_offset": 0,
00:22:44.007 "data_size": 65536
00:22:44.007 },
00:22:44.007 {
00:22:44.007 "name": "BaseBdev4",
00:22:44.007 "uuid": "d6664c35-0a64-4e2a-b553-8129aecad9e6",
00:22:44.007 "is_configured": true,
00:22:44.007 "data_offset": 0,
00:22:44.007 "data_size": 65536
00:22:44.007 }
00:22:44.007 ]
00:22:44.007 }'
00:22:44.007 13:46:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:44.007 13:46:23 -- common/autotest_common.sh@10 -- # set +x
00:22:44.579 13:46:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:44.837 [2024-07-10 13:46:24.040441] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:44.837 [2024-07-10 13:46:24.040497] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:44.837
00:22:44.837 Latency(us)
00:22:44.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.838 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:22:44.838 raid_bdev1 : 11.02 104.74 314.22 0.00 0.00 13561.49 389.92 120883.87
00:22:44.838 ===================================================================================================================
00:22:44.838 Total : 104.74 314.22 0.00 0.00 13561.49 389.92 120883.87
00:22:44.838 [2024-07-10 13:46:24.166288] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:44.838 [2024-07-10 13:46:24.166352] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:44.838 [2024-07-10 13:46:24.166464] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:44.838 [2024-07-10 13:46:24.166484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline
00:22:44.838 0
00:22:45.097 13:46:24 -- bdev/bdev_raid.sh@671 -- # jq length
00:22:45.097 13:46:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:45.097 13:46:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:22:45.097 13:46:24 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:22:45.097 13:46:24 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:22:45.097 13:46:24 -- bdev/nbd_common.sh@9 -- # local
rpc_server=/var/tmp/spdk-raid.sock 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@12 -- # local i 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.097 13:46:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:45.357 /dev/nbd0 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:45.357 13:46:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:45.357 13:46:24 -- common/autotest_common.sh@857 -- # local i 00:22:45.357 13:46:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:45.357 13:46:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:45.357 13:46:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:45.357 13:46:24 -- common/autotest_common.sh@861 -- # break 00:22:45.357 13:46:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:45.357 13:46:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:45.357 13:46:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:45.357 1+0 records in 00:22:45.357 1+0 records out 00:22:45.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254841 s, 16.1 MB/s 00:22:45.357 13:46:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.357 13:46:24 -- common/autotest_common.sh@874 -- # size=4096 00:22:45.357 13:46:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.357 13:46:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:45.357 13:46:24 -- common/autotest_common.sh@877 -- # return 0 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@678 -- # continue 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:45.357 13:46:24 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@12 -- # local i 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.357 13:46:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:45.615 /dev/nbd1 00:22:45.616 13:46:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
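
Note: the waitfornbd gate that runs next for nbd1 (and ran above for nbd0) follows the pattern below, sketched from the autotest_common.sh trace; the polling bounds and the O_DIRECT probe are what the trace shows, the loop arrangement is an assumption.

    waitfornbd() {
        local nbd_name=$1
        local i
        local nbdtest_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch path seen in this run's trace
        local size

        # wait up to 20 x 0.1 s for the kernel to list the device
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1
        done

        # then prove the device actually serves I/O: one 4 KiB O_DIRECT read must come back non-empty
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$nbdtest_file" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$nbdtest_file")
            rm -f "$nbdtest_file"
            if [ "$size" != "0" ]; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
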
00:22:45.616 13:46:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:45.616 13:46:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:45.616 13:46:24 -- common/autotest_common.sh@857 -- # local i 00:22:45.616 13:46:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:45.616 13:46:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:45.616 13:46:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:45.616 13:46:24 -- common/autotest_common.sh@861 -- # break 00:22:45.616 13:46:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:45.616 13:46:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:45.616 13:46:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:45.616 1+0 records in 00:22:45.616 1+0 records out 00:22:45.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417803 s, 9.8 MB/s 00:22:45.616 13:46:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.616 13:46:24 -- common/autotest_common.sh@874 -- # size=4096 00:22:45.616 13:46:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.616 13:46:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:45.616 13:46:24 -- common/autotest_common.sh@877 -- # return 0 00:22:45.616 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:45.616 13:46:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.616 13:46:24 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:45.873 13:46:25 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@51 -- # local i 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:45.873 13:46:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@41 -- # break 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@45 -- # return 0 00:22:46.132 13:46:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:46.132 13:46:25 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:46.132 13:46:25 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 
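
Note: the integrity pass interleaved through this stretch (bdev_raid.sh lines 675-684 in the trace) exports the rebuilt array over NBD and byte-compares it against each surviving base bdev. As a sketch, assuming $rpc_server=/var/tmp/spdk-raid.sock and the test's base_bdevs array, whose slot 1 was emptied earlier by bdev_raid_remove_base_bdev:

    nbd_start_disks "$rpc_server" spare /dev/nbd0
    for bdev in "${base_bdevs[@]:1}"; do
        if [ -z "$bdev" ]; then
            continue   # removed slot: nothing left to compare against
        fi
        nbd_start_disks "$rpc_server" "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1   # -i skips data_offset bytes; 0 here since superblock=false
        nbd_stop_disks "$rpc_server" /dev/nbd1
    done
    nbd_stop_disks "$rpc_server" /dev/nbd0
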
00:22:46.132 13:46:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@12 -- # local i 00:22:46.132 13:46:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:46.133 13:46:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.133 13:46:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:46.391 /dev/nbd1 00:22:46.391 13:46:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:46.391 13:46:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:46.391 13:46:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:46.391 13:46:25 -- common/autotest_common.sh@857 -- # local i 00:22:46.391 13:46:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:46.391 13:46:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:46.391 13:46:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:46.391 13:46:25 -- common/autotest_common.sh@861 -- # break 00:22:46.391 13:46:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:46.391 13:46:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:46.391 13:46:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:46.391 1+0 records in 00:22:46.391 1+0 records out 00:22:46.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334586 s, 12.2 MB/s 00:22:46.391 13:46:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:46.391 13:46:25 -- common/autotest_common.sh@874 -- # size=4096 00:22:46.391 13:46:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:46.391 13:46:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:46.391 13:46:25 -- common/autotest_common.sh@877 -- # return 0 00:22:46.391 13:46:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:46.391 13:46:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.391 13:46:25 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:46.648 13:46:25 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@51 -- # local i 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:46.648 13:46:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@41 -- # 
break 00:22:46.905 13:46:26 -- bdev/nbd_common.sh@45 -- # return 0 00:22:46.905 13:46:26 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@51 -- # local i 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:46.906 13:46:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@41 -- # break 00:22:47.164 13:46:26 -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.164 13:46:26 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:47.164 13:46:26 -- bdev/bdev_raid.sh@709 -- # killprocess 129262 00:22:47.164 13:46:26 -- common/autotest_common.sh@926 -- # '[' -z 129262 ']' 00:22:47.164 13:46:26 -- common/autotest_common.sh@930 -- # kill -0 129262 00:22:47.164 13:46:26 -- common/autotest_common.sh@931 -- # uname 00:22:47.164 13:46:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:47.164 13:46:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129262 00:22:47.164 13:46:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:47.164 killing process with pid 129262 00:22:47.164 13:46:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:47.164 13:46:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129262' 00:22:47.164 13:46:26 -- common/autotest_common.sh@945 -- # kill 129262 00:22:47.164 Received shutdown signal, test time was about 13.387444 seconds 00:22:47.164 00:22:47.164 Latency(us) 00:22:47.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.165 =================================================================================================================== 00:22:47.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.165 13:46:26 -- common/autotest_common.sh@950 -- # wait 129262 00:22:47.165 [2024-07-10 13:46:26.505336] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:47.730 [2024-07-10 13:46:26.967175] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:49.626 13:46:28 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:49.626 00:22:49.626 real 0m19.238s 00:22:49.626 user 0m29.119s 00:22:49.626 sys 0m2.251s 00:22:49.626 13:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.627 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 ************************************ 00:22:49.627 END TEST raid_rebuild_test_io 00:22:49.627 ************************************ 00:22:49.627 13:46:28 
-- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:49.627 13:46:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:49.627 13:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:49.627 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 ************************************ 00:22:49.627 START TEST raid_rebuild_test_sb_io 00:22:49.627 ************************************ 00:22:49.627 13:46:28 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@544 -- # raid_pid=129821 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129821 /var/tmp/spdk-raid.sock 00:22:49.627 13:46:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:49.627 13:46:28 -- common/autotest_common.sh@819 -- # '[' -z 129821 ']' 00:22:49.627 13:46:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:49.627 13:46:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.627 13:46:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:49.627 13:46:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.627 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:22:49.627 [2024-07-10 13:46:28.596679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:49.627 [2024-07-10 13:46:28.596822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129821 ] 00:22:49.627 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:49.627 Zero copy mechanism will not be used. 00:22:49.627 [2024-07-10 13:46:28.755022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.627 [2024-07-10 13:46:28.971920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.885 [2024-07-10 13:46:29.181696] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.142 13:46:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.142 13:46:29 -- common/autotest_common.sh@852 -- # return 0 00:22:50.142 13:46:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:50.142 13:46:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:50.142 13:46:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:50.400 BaseBdev1_malloc 00:22:50.400 13:46:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:50.658 [2024-07-10 13:46:29.865893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:50.658 [2024-07-10 13:46:29.866013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.658 [2024-07-10 13:46:29.866044] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:50.658 [2024-07-10 13:46:29.866083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.658 [2024-07-10 13:46:29.868364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.658 [2024-07-10 13:46:29.868417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:50.658 BaseBdev1 00:22:50.658 13:46:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:50.658 13:46:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:50.658 13:46:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:50.916 BaseBdev2_malloc 00:22:50.916 13:46:30 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:51.175 [2024-07-10 13:46:30.356243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:51.175 [2024-07-10 13:46:30.356329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.175 [2024-07-10 13:46:30.356362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:51.175 [2024-07-10 13:46:30.356404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:22:51.175 [2024-07-10 13:46:30.358462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.175 [2024-07-10 13:46:30.358511] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:51.175 BaseBdev2 00:22:51.175 13:46:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:51.175 13:46:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:51.175 13:46:30 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:51.452 BaseBdev3_malloc 00:22:51.452 13:46:30 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:51.732 [2024-07-10 13:46:30.800220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:51.732 [2024-07-10 13:46:30.800310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.732 [2024-07-10 13:46:30.800348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:51.732 [2024-07-10 13:46:30.800384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.732 [2024-07-10 13:46:30.802546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.732 [2024-07-10 13:46:30.802609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:51.732 BaseBdev3 00:22:51.732 13:46:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:51.732 13:46:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:51.732 13:46:30 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:51.732 BaseBdev4_malloc 00:22:51.732 13:46:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:51.989 [2024-07-10 13:46:31.267125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:51.989 [2024-07-10 13:46:31.267236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.989 [2024-07-10 13:46:31.267270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:51.989 [2024-07-10 13:46:31.267312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.989 [2024-07-10 13:46:31.269529] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.989 [2024-07-10 13:46:31.269584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:51.989 BaseBdev4 00:22:51.989 13:46:31 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:52.247 spare_malloc 00:22:52.247 13:46:31 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:52.506 spare_delay 00:22:52.506 13:46:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:52.765 [2024-07-10 13:46:31.904939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:22:52.765 [2024-07-10 13:46:31.905030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.765 [2024-07-10 13:46:31.905058] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:52.765 [2024-07-10 13:46:31.905100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.765 [2024-07-10 13:46:31.907272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.765 [2024-07-10 13:46:31.907335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:52.765 spare 00:22:52.765 13:46:31 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:52.765 [2024-07-10 13:46:32.100743] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.765 [2024-07-10 13:46:32.102650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:52.765 [2024-07-10 13:46:32.102740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:52.765 [2024-07-10 13:46:32.102790] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:52.765 [2024-07-10 13:46:32.103032] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:52.765 [2024-07-10 13:46:32.103051] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:52.765 [2024-07-10 13:46:32.103193] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:52.765 [2024-07-10 13:46:32.103566] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:52.765 [2024-07-10 13:46:32.103589] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:52.765 [2024-07-10 13:46:32.103767] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.765 13:46:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.023 13:46:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.023 "name": "raid_bdev1", 00:22:53.023 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:22:53.023 "strip_size_kb": 0, 00:22:53.023 "state": "online", 00:22:53.023 "raid_level": "raid1", 00:22:53.023 "superblock": true, 00:22:53.023 "num_base_bdevs": 4, 00:22:53.023 "num_base_bdevs_discovered": 4, 
00:22:53.023 "num_base_bdevs_operational": 4, 00:22:53.023 "base_bdevs_list": [ 00:22:53.023 { 00:22:53.023 "name": "BaseBdev1", 00:22:53.023 "uuid": "5ca93039-74e8-50de-8e52-a7712bd8a839", 00:22:53.023 "is_configured": true, 00:22:53.023 "data_offset": 2048, 00:22:53.023 "data_size": 63488 00:22:53.023 }, 00:22:53.023 { 00:22:53.023 "name": "BaseBdev2", 00:22:53.023 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:22:53.023 "is_configured": true, 00:22:53.023 "data_offset": 2048, 00:22:53.023 "data_size": 63488 00:22:53.023 }, 00:22:53.023 { 00:22:53.023 "name": "BaseBdev3", 00:22:53.023 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:22:53.023 "is_configured": true, 00:22:53.023 "data_offset": 2048, 00:22:53.023 "data_size": 63488 00:22:53.023 }, 00:22:53.023 { 00:22:53.023 "name": "BaseBdev4", 00:22:53.024 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:22:53.024 "is_configured": true, 00:22:53.024 "data_offset": 2048, 00:22:53.024 "data_size": 63488 00:22:53.024 } 00:22:53.024 ] 00:22:53.024 }' 00:22:53.024 13:46:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.024 13:46:32 -- common/autotest_common.sh@10 -- # set +x 00:22:53.957 13:46:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:53.957 13:46:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:53.957 [2024-07-10 13:46:33.171213] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.957 13:46:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:53.957 13:46:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:53.957 13:46:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.214 13:46:33 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:54.214 13:46:33 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:54.214 13:46:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:54.215 13:46:33 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:54.215 [2024-07-10 13:46:33.518364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:54.215 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:54.215 Zero copy mechanism will not be used. 00:22:54.215 Running I/O for 60 seconds... 
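
Note: from here the run differs from the plain rebuild test in one respect. bdevperf was started with -z -U, so its 60-second randrw job against raid_bdev1 only begins once the script fires the perform_tests RPC, and the base-bdev removal traced below lands while that I/O is in flight. Roughly, as a sketch (exact sequencing in bdev_raid.sh may differ; $rootdir is the spdk checkout and $rpc_py is as above, both assumptions for brevity):

    # drive I/O in the background, then yank a base bdev mid-run
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/spdk-raid.sock perform_tests &
    $rpc_py bdev_raid_remove_base_bdev BaseBdev1   # raid1 keeps serving from the remaining mirrors
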
00:22:54.473 [2024-07-10 13:46:33.620571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:54.473 [2024-07-10 13:46:33.620801] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.473 13:46:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.731 13:46:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.731 "name": "raid_bdev1", 00:22:54.731 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:22:54.731 "strip_size_kb": 0, 00:22:54.731 "state": "online", 00:22:54.731 "raid_level": "raid1", 00:22:54.731 "superblock": true, 00:22:54.731 "num_base_bdevs": 4, 00:22:54.731 "num_base_bdevs_discovered": 3, 00:22:54.731 "num_base_bdevs_operational": 3, 00:22:54.731 "base_bdevs_list": [ 00:22:54.731 { 00:22:54.731 "name": null, 00:22:54.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.731 "is_configured": false, 00:22:54.731 "data_offset": 2048, 00:22:54.731 "data_size": 63488 00:22:54.731 }, 00:22:54.731 { 00:22:54.731 "name": "BaseBdev2", 00:22:54.731 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:22:54.731 "is_configured": true, 00:22:54.731 "data_offset": 2048, 00:22:54.731 "data_size": 63488 00:22:54.731 }, 00:22:54.731 { 00:22:54.731 "name": "BaseBdev3", 00:22:54.731 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:22:54.731 "is_configured": true, 00:22:54.731 "data_offset": 2048, 00:22:54.731 "data_size": 63488 00:22:54.731 }, 00:22:54.731 { 00:22:54.731 "name": "BaseBdev4", 00:22:54.731 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:22:54.731 "is_configured": true, 00:22:54.731 "data_offset": 2048, 00:22:54.731 "data_size": 63488 00:22:54.731 } 00:22:54.731 ] 00:22:54.731 }' 00:22:54.731 13:46:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.731 13:46:33 -- common/autotest_common.sh@10 -- # set +x 00:22:55.297 13:46:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:55.556 [2024-07-10 13:46:34.735706] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:55.556 [2024-07-10 13:46:34.735773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.556 13:46:34 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:55.556 [2024-07-10 13:46:34.822609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:55.556 [2024-07-10 13:46:34.824719] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.814 
[2024-07-10 13:46:34.953575] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:55.814 [2024-07-10 13:46:34.955174] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:55.814 [2024-07-10 13:46:35.169065] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:55.814 [2024-07-10 13:46:35.169442] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:56.404 [2024-07-10 13:46:35.448950] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:56.404 [2024-07-10 13:46:35.450511] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:56.404 [2024-07-10 13:46:35.689423] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:56.404 [2024-07-10 13:46:35.689803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.663 13:46:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.922 [2024-07-10 13:46:36.064302] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.922 "name": "raid_bdev1", 00:22:56.922 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:22:56.922 "strip_size_kb": 0, 00:22:56.922 "state": "online", 00:22:56.922 "raid_level": "raid1", 00:22:56.922 "superblock": true, 00:22:56.922 "num_base_bdevs": 4, 00:22:56.922 "num_base_bdevs_discovered": 4, 00:22:56.922 "num_base_bdevs_operational": 4, 00:22:56.922 "process": { 00:22:56.922 "type": "rebuild", 00:22:56.922 "target": "spare", 00:22:56.922 "progress": { 00:22:56.922 "blocks": 12288, 00:22:56.922 "percent": 19 00:22:56.922 } 00:22:56.922 }, 00:22:56.922 "base_bdevs_list": [ 00:22:56.922 { 00:22:56.922 "name": "spare", 00:22:56.922 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:22:56.922 "is_configured": true, 00:22:56.922 "data_offset": 2048, 00:22:56.922 "data_size": 63488 00:22:56.922 }, 00:22:56.922 { 00:22:56.922 "name": "BaseBdev2", 00:22:56.922 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:22:56.922 "is_configured": true, 00:22:56.922 "data_offset": 2048, 00:22:56.922 "data_size": 63488 00:22:56.922 }, 00:22:56.922 { 00:22:56.922 "name": "BaseBdev3", 00:22:56.922 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:22:56.922 "is_configured": true, 00:22:56.922 "data_offset": 2048, 00:22:56.922 "data_size": 63488 00:22:56.922 }, 00:22:56.922 { 00:22:56.922 "name": "BaseBdev4", 00:22:56.922 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:22:56.922 
"is_configured": true, 00:22:56.922 "data_offset": 2048, 00:22:56.922 "data_size": 63488 00:22:56.922 } 00:22:56.922 ] 00:22:56.922 }' 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.922 13:46:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:57.181 [2024-07-10 13:46:36.290402] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:57.181 [2024-07-10 13:46:36.414664] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.439 [2024-07-10 13:46:36.632790] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:57.439 [2024-07-10 13:46:36.638946] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.439 [2024-07-10 13:46:36.665153] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.439 13:46:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.697 13:46:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.697 "name": "raid_bdev1", 00:22:57.697 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:22:57.697 "strip_size_kb": 0, 00:22:57.697 "state": "online", 00:22:57.697 "raid_level": "raid1", 00:22:57.697 "superblock": true, 00:22:57.697 "num_base_bdevs": 4, 00:22:57.697 "num_base_bdevs_discovered": 3, 00:22:57.697 "num_base_bdevs_operational": 3, 00:22:57.697 "base_bdevs_list": [ 00:22:57.697 { 00:22:57.697 "name": null, 00:22:57.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.697 "is_configured": false, 00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 }, 00:22:57.697 { 00:22:57.697 "name": "BaseBdev2", 00:22:57.697 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:22:57.697 "is_configured": true, 00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 }, 00:22:57.697 { 00:22:57.697 "name": "BaseBdev3", 00:22:57.697 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:22:57.697 "is_configured": true, 00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 }, 00:22:57.697 { 00:22:57.697 "name": "BaseBdev4", 00:22:57.697 "uuid": 
"1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:22:57.697 "is_configured": true, 00:22:57.697 "data_offset": 2048, 00:22:57.697 "data_size": 63488 00:22:57.697 } 00:22:57.697 ] 00:22:57.697 }' 00:22:57.697 13:46:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.697 13:46:36 -- common/autotest_common.sh@10 -- # set +x 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.635 "name": "raid_bdev1", 00:22:58.635 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:22:58.635 "strip_size_kb": 0, 00:22:58.635 "state": "online", 00:22:58.635 "raid_level": "raid1", 00:22:58.635 "superblock": true, 00:22:58.635 "num_base_bdevs": 4, 00:22:58.635 "num_base_bdevs_discovered": 3, 00:22:58.635 "num_base_bdevs_operational": 3, 00:22:58.635 "base_bdevs_list": [ 00:22:58.635 { 00:22:58.635 "name": null, 00:22:58.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.635 "is_configured": false, 00:22:58.635 "data_offset": 2048, 00:22:58.635 "data_size": 63488 00:22:58.635 }, 00:22:58.635 { 00:22:58.635 "name": "BaseBdev2", 00:22:58.635 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:22:58.635 "is_configured": true, 00:22:58.635 "data_offset": 2048, 00:22:58.635 "data_size": 63488 00:22:58.635 }, 00:22:58.635 { 00:22:58.635 "name": "BaseBdev3", 00:22:58.635 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:22:58.635 "is_configured": true, 00:22:58.635 "data_offset": 2048, 00:22:58.635 "data_size": 63488 00:22:58.635 }, 00:22:58.635 { 00:22:58.635 "name": "BaseBdev4", 00:22:58.635 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:22:58.635 "is_configured": true, 00:22:58.635 "data_offset": 2048, 00:22:58.635 "data_size": 63488 00:22:58.635 } 00:22:58.635 ] 00:22:58.635 }' 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:58.635 13:46:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.894 13:46:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:58.894 13:46:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:58.894 [2024-07-10 13:46:38.219214] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:58.894 [2024-07-10 13:46:38.219282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.152 13:46:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:59.152 [2024-07-10 13:46:38.308567] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:59.152 [2024-07-10 13:46:38.310549] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:59.152 [2024-07-10 13:46:38.431561] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:22:59.152 [2024-07-10 13:46:38.433177] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:59.410 [2024-07-10 13:46:38.683235] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:59.410 [2024-07-10 13:46:38.683616] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:59.668 [2024-07-10 13:46:39.008654] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:59.668 [2024-07-10 13:46:39.009292] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:59.926 [2024-07-10 13:46:39.228787] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:59.926 [2024-07-10 13:46:39.229163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.183 13:46:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:00.183 "name": "raid_bdev1", 00:23:00.183 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:00.183 "strip_size_kb": 0, 00:23:00.183 "state": "online", 00:23:00.183 "raid_level": "raid1", 00:23:00.183 "superblock": true, 00:23:00.183 "num_base_bdevs": 4, 00:23:00.183 "num_base_bdevs_discovered": 4, 00:23:00.183 "num_base_bdevs_operational": 4, 00:23:00.183 "process": { 00:23:00.183 "type": "rebuild", 00:23:00.183 "target": "spare", 00:23:00.183 "progress": { 00:23:00.183 "blocks": 12288, 00:23:00.183 "percent": 19 00:23:00.183 } 00:23:00.183 }, 00:23:00.183 "base_bdevs_list": [ 00:23:00.183 { 00:23:00.183 "name": "spare", 00:23:00.183 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:00.183 "is_configured": true, 00:23:00.183 "data_offset": 2048, 00:23:00.183 "data_size": 63488 00:23:00.183 }, 00:23:00.183 { 00:23:00.183 "name": "BaseBdev2", 00:23:00.183 "uuid": "4254639d-5483-5b15-8ae1-8db44a41d590", 00:23:00.183 "is_configured": true, 00:23:00.183 "data_offset": 2048, 00:23:00.183 "data_size": 63488 00:23:00.183 }, 00:23:00.183 { 00:23:00.183 "name": "BaseBdev3", 00:23:00.183 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:00.183 "is_configured": true, 00:23:00.183 "data_offset": 2048, 00:23:00.183 "data_size": 63488 00:23:00.183 }, 00:23:00.183 { 00:23:00.183 "name": "BaseBdev4", 00:23:00.183 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:00.184 "is_configured": true, 00:23:00.184 "data_offset": 2048, 00:23:00.184 "data_size": 63488 00:23:00.184 } 00:23:00.184 ] 00:23:00.184 }' 00:23:00.184 13:46:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.441 [2024-07-10 13:46:39.598127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:00.441 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:00.441 13:46:39 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:00.700 [2024-07-10 13:46:39.853239] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:00.700 [2024-07-10 13:46:39.933610] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:00.700 [2024-07-10 13:46:39.935192] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:00.700 [2024-07-10 13:46:40.053467] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:23:00.700 [2024-07-10 13:46:40.053536] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.958 13:46:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.216 "name": "raid_bdev1", 00:23:01.216 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:01.216 "strip_size_kb": 0, 00:23:01.216 "state": "online", 00:23:01.216 "raid_level": "raid1", 00:23:01.216 "superblock": true, 00:23:01.216 "num_base_bdevs": 4, 00:23:01.216 "num_base_bdevs_discovered": 3, 00:23:01.216 "num_base_bdevs_operational": 3, 00:23:01.216 "process": { 00:23:01.216 "type": "rebuild", 00:23:01.216 "target": "spare", 00:23:01.216 "progress": { 00:23:01.216 "blocks": 26624, 00:23:01.216 "percent": 41 00:23:01.216 } 00:23:01.216 }, 00:23:01.216 "base_bdevs_list": [ 00:23:01.216 { 00:23:01.216 "name": "spare", 00:23:01.216 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:01.216 "is_configured": true, 00:23:01.216 "data_offset": 2048, 00:23:01.216 "data_size": 63488 00:23:01.216 }, 00:23:01.216 { 00:23:01.216 "name": null, 00:23:01.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.216 "is_configured": false, 00:23:01.216 "data_offset": 
2048, 00:23:01.216 "data_size": 63488 00:23:01.216 }, 00:23:01.216 { 00:23:01.216 "name": "BaseBdev3", 00:23:01.216 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:01.216 "is_configured": true, 00:23:01.216 "data_offset": 2048, 00:23:01.216 "data_size": 63488 00:23:01.216 }, 00:23:01.216 { 00:23:01.216 "name": "BaseBdev4", 00:23:01.216 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:01.216 "is_configured": true, 00:23:01.216 "data_offset": 2048, 00:23:01.216 "data_size": 63488 00:23:01.216 } 00:23:01.216 ] 00:23:01.216 }' 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@657 -- # local timeout=515 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.216 13:46:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.475 [2024-07-10 13:46:40.769873] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:01.475 13:46:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.475 "name": "raid_bdev1", 00:23:01.475 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:01.475 "strip_size_kb": 0, 00:23:01.475 "state": "online", 00:23:01.475 "raid_level": "raid1", 00:23:01.475 "superblock": true, 00:23:01.475 "num_base_bdevs": 4, 00:23:01.475 "num_base_bdevs_discovered": 3, 00:23:01.475 "num_base_bdevs_operational": 3, 00:23:01.475 "process": { 00:23:01.475 "type": "rebuild", 00:23:01.475 "target": "spare", 00:23:01.475 "progress": { 00:23:01.475 "blocks": 30720, 00:23:01.475 "percent": 48 00:23:01.475 } 00:23:01.475 }, 00:23:01.475 "base_bdevs_list": [ 00:23:01.475 { 00:23:01.475 "name": "spare", 00:23:01.475 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:01.475 "is_configured": true, 00:23:01.475 "data_offset": 2048, 00:23:01.475 "data_size": 63488 00:23:01.475 }, 00:23:01.475 { 00:23:01.475 "name": null, 00:23:01.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.475 "is_configured": false, 00:23:01.475 "data_offset": 2048, 00:23:01.475 "data_size": 63488 00:23:01.475 }, 00:23:01.475 { 00:23:01.475 "name": "BaseBdev3", 00:23:01.475 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:01.475 "is_configured": true, 00:23:01.475 "data_offset": 2048, 00:23:01.475 "data_size": 63488 00:23:01.475 }, 00:23:01.475 { 00:23:01.475 "name": "BaseBdev4", 00:23:01.475 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:01.475 "is_configured": true, 00:23:01.475 "data_offset": 2048, 00:23:01.475 "data_size": 63488 00:23:01.475 } 00:23:01.475 ] 00:23:01.475 }' 00:23:01.475 13:46:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.734 13:46:40 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.734 13:46:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.734 13:46:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.734 13:46:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:01.734 [2024-07-10 13:46:40.920001] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:01.993 [2024-07-10 13:46:41.167711] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:01.993 [2024-07-10 13:46:41.305310] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:02.251 [2024-07-10 13:46:41.553028] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:02.251 [2024-07-10 13:46:41.553648] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:02.509 [2024-07-10 13:46:41.698969] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.767 13:46:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:03.026 "name": "raid_bdev1", 00:23:03.026 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:03.026 "strip_size_kb": 0, 00:23:03.026 "state": "online", 00:23:03.026 "raid_level": "raid1", 00:23:03.026 "superblock": true, 00:23:03.026 "num_base_bdevs": 4, 00:23:03.026 "num_base_bdevs_discovered": 3, 00:23:03.026 "num_base_bdevs_operational": 3, 00:23:03.026 "process": { 00:23:03.026 "type": "rebuild", 00:23:03.026 "target": "spare", 00:23:03.026 "progress": { 00:23:03.026 "blocks": 51200, 00:23:03.026 "percent": 80 00:23:03.026 } 00:23:03.026 }, 00:23:03.026 "base_bdevs_list": [ 00:23:03.026 { 00:23:03.026 "name": "spare", 00:23:03.026 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:03.026 "is_configured": true, 00:23:03.026 "data_offset": 2048, 00:23:03.026 "data_size": 63488 00:23:03.026 }, 00:23:03.026 { 00:23:03.026 "name": null, 00:23:03.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.026 "is_configured": false, 00:23:03.026 "data_offset": 2048, 00:23:03.026 "data_size": 63488 00:23:03.026 }, 00:23:03.026 { 00:23:03.026 "name": "BaseBdev3", 00:23:03.026 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:03.026 "is_configured": true, 00:23:03.026 "data_offset": 2048, 00:23:03.026 "data_size": 63488 00:23:03.026 }, 00:23:03.026 { 00:23:03.026 "name": "BaseBdev4", 00:23:03.026 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:03.026 "is_configured": true, 00:23:03.026 "data_offset": 2048, 00:23:03.026 "data_size": 63488 
00:23:03.026 } 00:23:03.026 ] 00:23:03.026 }' 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.026 13:46:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:03.592 [2024-07-10 13:46:42.726664] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:03.592 [2024-07-10 13:46:42.826439] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:03.592 [2024-07-10 13:46:42.830369] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.234 13:46:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.234 "name": "raid_bdev1", 00:23:04.234 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:04.234 "strip_size_kb": 0, 00:23:04.234 "state": "online", 00:23:04.234 "raid_level": "raid1", 00:23:04.234 "superblock": true, 00:23:04.234 "num_base_bdevs": 4, 00:23:04.234 "num_base_bdevs_discovered": 3, 00:23:04.234 "num_base_bdevs_operational": 3, 00:23:04.234 "base_bdevs_list": [ 00:23:04.234 { 00:23:04.234 "name": "spare", 00:23:04.234 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:04.234 "is_configured": true, 00:23:04.234 "data_offset": 2048, 00:23:04.234 "data_size": 63488 00:23:04.234 }, 00:23:04.234 { 00:23:04.234 "name": null, 00:23:04.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.234 "is_configured": false, 00:23:04.234 "data_offset": 2048, 00:23:04.234 "data_size": 63488 00:23:04.234 }, 00:23:04.234 { 00:23:04.234 "name": "BaseBdev3", 00:23:04.234 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:04.234 "is_configured": true, 00:23:04.234 "data_offset": 2048, 00:23:04.234 "data_size": 63488 00:23:04.234 }, 00:23:04.234 { 00:23:04.234 "name": "BaseBdev4", 00:23:04.234 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:04.234 "is_configured": true, 00:23:04.234 "data_offset": 2048, 00:23:04.234 "data_size": 63488 00:23:04.234 } 00:23:04.234 ] 00:23:04.234 }' 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@660 -- # break 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@184 
-- # local process_type=none 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.235 13:46:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.492 13:46:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.492 "name": "raid_bdev1", 00:23:04.492 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:04.492 "strip_size_kb": 0, 00:23:04.492 "state": "online", 00:23:04.492 "raid_level": "raid1", 00:23:04.492 "superblock": true, 00:23:04.492 "num_base_bdevs": 4, 00:23:04.492 "num_base_bdevs_discovered": 3, 00:23:04.492 "num_base_bdevs_operational": 3, 00:23:04.492 "base_bdevs_list": [ 00:23:04.492 { 00:23:04.492 "name": "spare", 00:23:04.492 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:04.492 "is_configured": true, 00:23:04.492 "data_offset": 2048, 00:23:04.492 "data_size": 63488 00:23:04.492 }, 00:23:04.492 { 00:23:04.492 "name": null, 00:23:04.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.492 "is_configured": false, 00:23:04.492 "data_offset": 2048, 00:23:04.492 "data_size": 63488 00:23:04.492 }, 00:23:04.492 { 00:23:04.492 "name": "BaseBdev3", 00:23:04.492 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:04.492 "is_configured": true, 00:23:04.492 "data_offset": 2048, 00:23:04.492 "data_size": 63488 00:23:04.492 }, 00:23:04.492 { 00:23:04.492 "name": "BaseBdev4", 00:23:04.492 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:04.492 "is_configured": true, 00:23:04.492 "data_offset": 2048, 00:23:04.492 "data_size": 63488 00:23:04.492 } 00:23:04.492 ] 00:23:04.492 }' 00:23:04.492 13:46:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.492 13:46:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.492 13:46:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.750 13:46:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.750 13:46:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.750 "name": "raid_bdev1", 00:23:04.750 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:04.750 "strip_size_kb": 0, 00:23:04.751 "state": "online", 00:23:04.751 "raid_level": "raid1", 00:23:04.751 "superblock": true, 00:23:04.751 "num_base_bdevs": 4, 00:23:04.751 "num_base_bdevs_discovered": 3, 00:23:04.751 
"num_base_bdevs_operational": 3, 00:23:04.751 "base_bdevs_list": [ 00:23:04.751 { 00:23:04.751 "name": "spare", 00:23:04.751 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:04.751 "is_configured": true, 00:23:04.751 "data_offset": 2048, 00:23:04.751 "data_size": 63488 00:23:04.751 }, 00:23:04.751 { 00:23:04.751 "name": null, 00:23:04.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.751 "is_configured": false, 00:23:04.751 "data_offset": 2048, 00:23:04.751 "data_size": 63488 00:23:04.751 }, 00:23:04.751 { 00:23:04.751 "name": "BaseBdev3", 00:23:04.751 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:04.751 "is_configured": true, 00:23:04.751 "data_offset": 2048, 00:23:04.751 "data_size": 63488 00:23:04.751 }, 00:23:04.751 { 00:23:04.751 "name": "BaseBdev4", 00:23:04.751 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:04.751 "is_configured": true, 00:23:04.751 "data_offset": 2048, 00:23:04.751 "data_size": 63488 00:23:04.751 } 00:23:04.751 ] 00:23:04.751 }' 00:23:04.751 13:46:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.751 13:46:44 -- common/autotest_common.sh@10 -- # set +x 00:23:05.701 13:46:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:05.701 [2024-07-10 13:46:44.980850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.701 [2024-07-10 13:46:44.980901] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.701 00:23:05.701 Latency(us) 00:23:05.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.701 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:05.701 raid_bdev1 : 11.54 90.65 271.96 0.00 0.00 15948.65 565.21 122715.44 00:23:05.701 =================================================================================================================== 00:23:05.701 Total : 90.65 271.96 0.00 0.00 15948.65 565.21 122715.44 00:23:05.959 0 00:23:05.959 [2024-07-10 13:46:45.064354] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.959 [2024-07-10 13:46:45.064408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.959 [2024-07-10 13:46:45.064520] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.959 [2024-07-10 13:46:45.064531] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:05.959 13:46:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.959 13:46:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:06.218 13:46:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:06.218 13:46:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:06.218 13:46:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@12 -- # local i 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:06.218 13:46:45 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.218 13:46:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:06.478 /dev/nbd0 00:23:06.478 13:46:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:06.478 13:46:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:06.478 13:46:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:06.478 13:46:45 -- common/autotest_common.sh@857 -- # local i 00:23:06.478 13:46:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:06.478 13:46:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:06.478 13:46:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:06.478 13:46:45 -- common/autotest_common.sh@861 -- # break 00:23:06.478 13:46:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:06.478 13:46:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:06.478 13:46:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.478 1+0 records in 00:23:06.478 1+0 records out 00:23:06.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344401 s, 11.9 MB/s 00:23:06.478 13:46:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.478 13:46:45 -- common/autotest_common.sh@874 -- # size=4096 00:23:06.479 13:46:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.479 13:46:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:06.479 13:46:45 -- common/autotest_common.sh@877 -- # return 0 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@678 -- # continue 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:06.479 13:46:45 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@12 -- # local i 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.479 13:46:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:06.737 /dev/nbd1 00:23:06.737 13:46:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:06.737 13:46:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:06.737 13:46:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:06.737 13:46:45 -- common/autotest_common.sh@857 -- # local i 00:23:06.737 13:46:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:06.737 13:46:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:06.737 13:46:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 
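The waitfornbd sequence traced above is the harness's readiness gate for NBD exports: it polls /proc/partitions until the device node registers, then proves the device is actually readable with one 4 KiB O_DIRECT read whose size is checked afterwards. A minimal sketch of that pattern follows; the function name, scratch path, and retry delay here are illustrative rather than the exact code in common/autotest_common.sh:

    # Sketch only: poll for the nbd device, then verify one direct read.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the traced helper may differ
        done
        # An O_DIRECT read fails fast if the kernel has not attached the device.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }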
00:23:06.737 13:46:45 -- common/autotest_common.sh@861 -- # break 00:23:06.737 13:46:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:06.737 13:46:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:06.737 13:46:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.737 1+0 records in 00:23:06.737 1+0 records out 00:23:06.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429292 s, 9.5 MB/s 00:23:06.737 13:46:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.737 13:46:45 -- common/autotest_common.sh@874 -- # size=4096 00:23:06.737 13:46:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.737 13:46:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:06.737 13:46:45 -- common/autotest_common.sh@877 -- # return 0 00:23:06.737 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.737 13:46:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:06.737 13:46:45 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:06.995 13:46:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@51 -- # local i 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.995 13:46:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@41 -- # break 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.254 13:46:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:07.254 13:46:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:07.254 13:46:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@12 -- # local i 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:07.254 13:46:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:07.511 /dev/nbd1 00:23:07.511 13:46:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:07.511 13:46:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:07.511 13:46:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:07.511 
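The cmp invocation above compares the rebuilt spare (exported as /dev/nbd0) against a surviving base bdev (/dev/nbd1) byte for byte, and the -i 1048576 flag tells GNU cmp to skip the first 1048576 bytes of both inputs before comparing. That skip appears to line up with the superblock layout reported by the RPC output earlier in the run: a data_offset of 2048 blocks at a 512-byte blocklen is exactly 1 MiB of per-member metadata, which legitimately differs between members, while the rebuilt data region must match. A hedged sketch of the check, with device paths as in the trace:

    # Compare payload regions only; the first 1 MiB is assumed to hold
    # per-bdev superblock metadata (2048 blocks x 512 B) that may differ.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1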
13:46:46 -- common/autotest_common.sh@857 -- # local i 00:23:07.511 13:46:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:07.511 13:46:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:07.511 13:46:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:07.511 13:46:46 -- common/autotest_common.sh@861 -- # break 00:23:07.511 13:46:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:07.511 13:46:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:07.511 13:46:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:07.511 1+0 records in 00:23:07.511 1+0 records out 00:23:07.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343986 s, 11.9 MB/s 00:23:07.511 13:46:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.512 13:46:46 -- common/autotest_common.sh@874 -- # size=4096 00:23:07.512 13:46:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.512 13:46:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:07.512 13:46:46 -- common/autotest_common.sh@877 -- # return 0 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:07.512 13:46:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:07.512 13:46:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@51 -- # local i 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.512 13:46:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@41 -- # break 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.769 13:46:47 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@51 -- # local i 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.769 13:46:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.027 13:46:47 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@41 -- # break 00:23:08.027 13:46:47 -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.027 13:46:47 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:08.027 13:46:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:08.027 13:46:47 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:08.027 13:46:47 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:08.291 13:46:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:08.548 [2024-07-10 13:46:47.695452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:08.548 [2024-07-10 13:46:47.695556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.548 [2024-07-10 13:46:47.695597] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:08.548 [2024-07-10 13:46:47.695615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.548 [2024-07-10 13:46:47.697854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.548 [2024-07-10 13:46:47.697926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:08.548 [2024-07-10 13:46:47.698050] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:08.548 [2024-07-10 13:46:47.698120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.548 BaseBdev1 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@696 -- # continue 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:08.548 13:46:47 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:08.807 13:46:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:09.064 [2024-07-10 13:46:48.198648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:09.064 [2024-07-10 13:46:48.198740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.064 [2024-07-10 13:46:48.198777] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:09.065 [2024-07-10 13:46:48.198794] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.065 [2024-07-10 13:46:48.199237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.065 [2024-07-10 13:46:48.199294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:09.065 [2024-07-10 13:46:48.199390] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:09.065 [2024-07-10 13:46:48.199407] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than 
existing raid bdev raid_bdev1 (1) 00:23:09.065 [2024-07-10 13:46:48.199414] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:09.065 [2024-07-10 13:46:48.199442] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:23:09.065 [2024-07-10 13:46:48.199552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.065 BaseBdev3 00:23:09.065 13:46:48 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:09.065 13:46:48 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:09.065 13:46:48 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:09.323 13:46:48 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:09.323 [2024-07-10 13:46:48.653891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:09.323 [2024-07-10 13:46:48.654214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.323 [2024-07-10 13:46:48.654330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:09.323 [2024-07-10 13:46:48.654422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.323 [2024-07-10 13:46:48.654948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.323 [2024-07-10 13:46:48.655101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:09.323 [2024-07-10 13:46:48.655298] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:09.323 [2024-07-10 13:46:48.655342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:09.323 BaseBdev4 00:23:09.323 13:46:48 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:09.581 13:46:48 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:09.839 [2024-07-10 13:46:49.127476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:09.839 [2024-07-10 13:46:49.127907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.839 [2024-07-10 13:46:49.128100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:23:09.839 [2024-07-10 13:46:49.128203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.839 [2024-07-10 13:46:49.128783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.839 [2024-07-10 13:46:49.128947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:09.839 [2024-07-10 13:46:49.129158] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:09.839 [2024-07-10 13:46:49.129193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:09.839 spare 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
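verify_raid_bdev_state, which the trace enters here, is the harness's core assertion: fetch every raid bdev over the private RPC socket, isolate raid_bdev1 with jq, and compare the reported state and member counts against the expected values. A sketch of that round trip under the same paths the trace uses (the shell variable names here are illustrative):

    # Sketch of the verify idiom: query the raid bdev and assert on fields.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ]]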
00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.839 13:46:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.097 [2024-07-10 13:46:49.229116] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:23:10.097 [2024-07-10 13:46:49.229155] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:10.097 [2024-07-10 13:46:49.229341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000396c0 00:23:10.097 [2024-07-10 13:46:49.229732] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:23:10.097 [2024-07-10 13:46:49.229753] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:23:10.097 [2024-07-10 13:46:49.229910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.097 13:46:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.097 "name": "raid_bdev1", 00:23:10.097 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:10.097 "strip_size_kb": 0, 00:23:10.097 "state": "online", 00:23:10.097 "raid_level": "raid1", 00:23:10.097 "superblock": true, 00:23:10.097 "num_base_bdevs": 4, 00:23:10.097 "num_base_bdevs_discovered": 3, 00:23:10.097 "num_base_bdevs_operational": 3, 00:23:10.097 "base_bdevs_list": [ 00:23:10.097 { 00:23:10.097 "name": "spare", 00:23:10.097 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:10.097 "is_configured": true, 00:23:10.097 "data_offset": 2048, 00:23:10.097 "data_size": 63488 00:23:10.097 }, 00:23:10.097 { 00:23:10.097 "name": null, 00:23:10.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.097 "is_configured": false, 00:23:10.097 "data_offset": 2048, 00:23:10.097 "data_size": 63488 00:23:10.097 }, 00:23:10.097 { 00:23:10.097 "name": "BaseBdev3", 00:23:10.097 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:10.097 "is_configured": true, 00:23:10.097 "data_offset": 2048, 00:23:10.097 "data_size": 63488 00:23:10.097 }, 00:23:10.097 { 00:23:10.097 "name": "BaseBdev4", 00:23:10.097 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:10.097 "is_configured": true, 00:23:10.097 "data_offset": 2048, 00:23:10.097 "data_size": 63488 00:23:10.097 } 00:23:10.097 ] 00:23:10.097 }' 00:23:10.097 13:46:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.097 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:11.030 "name": "raid_bdev1", 00:23:11.030 "uuid": "59ed09d4-3316-4920-9021-8cf5e4a41b63", 00:23:11.030 "strip_size_kb": 0, 00:23:11.030 "state": "online", 00:23:11.030 "raid_level": "raid1", 00:23:11.030 "superblock": true, 00:23:11.030 "num_base_bdevs": 4, 00:23:11.030 "num_base_bdevs_discovered": 3, 00:23:11.030 "num_base_bdevs_operational": 3, 00:23:11.030 "base_bdevs_list": [ 00:23:11.030 { 00:23:11.030 "name": "spare", 00:23:11.030 "uuid": "eecbcdd2-128c-5519-992a-097e6cb671b8", 00:23:11.030 "is_configured": true, 00:23:11.030 "data_offset": 2048, 00:23:11.030 "data_size": 63488 00:23:11.030 }, 00:23:11.030 { 00:23:11.030 "name": null, 00:23:11.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.030 "is_configured": false, 00:23:11.030 "data_offset": 2048, 00:23:11.030 "data_size": 63488 00:23:11.030 }, 00:23:11.030 { 00:23:11.030 "name": "BaseBdev3", 00:23:11.030 "uuid": "3ec9ed48-589c-54f8-9c7b-dbaf5be7dcb1", 00:23:11.030 "is_configured": true, 00:23:11.030 "data_offset": 2048, 00:23:11.030 "data_size": 63488 00:23:11.030 }, 00:23:11.030 { 00:23:11.030 "name": "BaseBdev4", 00:23:11.030 "uuid": "1c92deb3-0aed-5ce4-a8fd-679dcc129397", 00:23:11.030 "is_configured": true, 00:23:11.030 "data_offset": 2048, 00:23:11.030 "data_size": 63488 00:23:11.030 } 00:23:11.030 ] 00:23:11.030 }' 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:11.030 13:46:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:11.287 13:46:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:11.287 13:46:50 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.287 13:46:50 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:11.543 13:46:50 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.543 13:46:50 -- bdev/bdev_raid.sh@709 -- # killprocess 129821 00:23:11.543 13:46:50 -- common/autotest_common.sh@926 -- # '[' -z 129821 ']' 00:23:11.543 13:46:50 -- common/autotest_common.sh@930 -- # kill -0 129821 00:23:11.543 13:46:50 -- common/autotest_common.sh@931 -- # uname 00:23:11.543 13:46:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.543 13:46:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129821
00:23:11.543 killing process with pid 129821
00:23:11.544 Received shutdown signal, test time was about 17.181303 seconds
00:23:11.544 
00:23:11.544                                                                                                Latency(us)
00:23:11.544 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:11.544 ===================================================================================================================
00:23:11.544 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:11.544 13:46:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:11.544 13:46:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:11.544 13:46:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129821' 00:23:11.544 13:46:50 -- common/autotest_common.sh@945 -- # kill 129821 00:23:11.544 13:46:50 -- common/autotest_common.sh@950 -- # wait 129821 00:23:11.544 [2024-07-10
13:46:50.668687] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:11.544 [2024-07-10 13:46:50.668808] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.544 [2024-07-10 13:46:50.668915] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.544 [2024-07-10 13:46:50.668933] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:23:11.800 [2024-07-10 13:46:51.144528] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:13.698 ************************************ 00:23:13.698 END TEST raid_rebuild_test_sb_io 00:23:13.698 ************************************ 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:13.698 00:23:13.698 real 0m24.137s 00:23:13.698 user 0m38.662s 00:23:13.698 sys 0m2.639s 00:23:13.698 13:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.698 13:46:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:13.698 13:46:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:13.698 13:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:13.698 13:46:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 ************************************ 00:23:13.698 START TEST raid5f_state_function_test 00:23:13.698 ************************************ 00:23:13.698 13:46:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=130494 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130494' 00:23:13.698 Process raid pid: 130494 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130494 /var/tmp/spdk-raid.sock 00:23:13.698 13:46:52 -- common/autotest_common.sh@819 -- # '[' -z 130494 ']' 00:23:13.698 13:46:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:13.698 13:46:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:13.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:13.698 13:46:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:13.698 13:46:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:13.698 13:46:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 13:46:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:13.698 [2024-07-10 13:46:52.789966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:13.698 [2024-07-10 13:46:52.790202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.698 [2024-07-10 13:46:52.940886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.956 [2024-07-10 13:46:53.183574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.213 [2024-07-10 13:46:53.431706] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.471 13:46:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:14.471 13:46:53 -- common/autotest_common.sh@852 -- # return 0 00:23:14.471 13:46:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:14.729 [2024-07-10 13:46:53.992370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.729 [2024-07-10 13:46:53.992479] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.729 [2024-07-10 13:46:53.992492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.729 [2024-07-10 13:46:53.992507] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.729 [2024-07-10 13:46:53.992513] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:14.729 [2024-07-10 13:46:53.992548] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.730 13:46:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.987 13:46:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.987 "name": "Existed_Raid", 00:23:14.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.987 "strip_size_kb": 64, 00:23:14.987 "state": "configuring", 00:23:14.987 "raid_level": "raid5f", 00:23:14.987 "superblock": false, 00:23:14.987 "num_base_bdevs": 3, 00:23:14.987 "num_base_bdevs_discovered": 0, 00:23:14.987 "num_base_bdevs_operational": 3, 00:23:14.987 "base_bdevs_list": [ 00:23:14.987 { 00:23:14.987 "name": "BaseBdev1", 00:23:14.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.987 "is_configured": false, 00:23:14.987 "data_offset": 0, 00:23:14.987 "data_size": 0 00:23:14.987 }, 00:23:14.987 { 00:23:14.987 "name": "BaseBdev2", 00:23:14.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.987 "is_configured": false, 00:23:14.987 "data_offset": 0, 00:23:14.987 "data_size": 0 00:23:14.987 }, 00:23:14.987 { 00:23:14.987 "name": "BaseBdev3", 00:23:14.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.987 "is_configured": false, 00:23:14.987 "data_offset": 0, 00:23:14.987 "data_size": 0 00:23:14.987 } 00:23:14.987 ] 00:23:14.987 }' 00:23:14.987 13:46:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.987 13:46:54 -- common/autotest_common.sh@10 -- # set +x 00:23:15.921 13:46:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:15.921 [2024-07-10 13:46:55.216249] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.921 [2024-07-10 13:46:55.216303] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:15.921 13:46:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:16.188 [2024-07-10 13:46:55.423990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:16.188 [2024-07-10 13:46:55.424104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:16.188 [2024-07-10 13:46:55.424118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.188 [2024-07-10 13:46:55.424133] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.188 [2024-07-10 13:46:55.424139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:16.188 [2024-07-10 13:46:55.424169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.188 13:46:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:16.463 [2024-07-10 
13:46:55.678777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.463 BaseBdev1 00:23:16.463 13:46:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:16.463 13:46:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:16.463 13:46:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:16.463 13:46:55 -- common/autotest_common.sh@889 -- # local i 00:23:16.463 13:46:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:16.463 13:46:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:16.463 13:46:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:16.721 13:46:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:16.978 [ 00:23:16.978 { 00:23:16.978 "name": "BaseBdev1", 00:23:16.978 "aliases": [ 00:23:16.978 "2141f3a2-ce0c-4240-85f7-ae6b4c84c567" 00:23:16.978 ], 00:23:16.978 "product_name": "Malloc disk", 00:23:16.978 "block_size": 512, 00:23:16.978 "num_blocks": 65536, 00:23:16.978 "uuid": "2141f3a2-ce0c-4240-85f7-ae6b4c84c567", 00:23:16.978 "assigned_rate_limits": { 00:23:16.978 "rw_ios_per_sec": 0, 00:23:16.978 "rw_mbytes_per_sec": 0, 00:23:16.978 "r_mbytes_per_sec": 0, 00:23:16.978 "w_mbytes_per_sec": 0 00:23:16.978 }, 00:23:16.978 "claimed": true, 00:23:16.978 "claim_type": "exclusive_write", 00:23:16.978 "zoned": false, 00:23:16.978 "supported_io_types": { 00:23:16.978 "read": true, 00:23:16.978 "write": true, 00:23:16.978 "unmap": true, 00:23:16.978 "write_zeroes": true, 00:23:16.978 "flush": true, 00:23:16.978 "reset": true, 00:23:16.978 "compare": false, 00:23:16.978 "compare_and_write": false, 00:23:16.978 "abort": true, 00:23:16.978 "nvme_admin": false, 00:23:16.978 "nvme_io": false 00:23:16.978 }, 00:23:16.978 "memory_domains": [ 00:23:16.978 { 00:23:16.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.978 "dma_device_type": 2 00:23:16.978 } 00:23:16.978 ], 00:23:16.978 "driver_specific": {} 00:23:16.978 } 00:23:16.978 ] 00:23:16.978 13:46:56 -- common/autotest_common.sh@895 -- # return 0 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.978 13:46:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.236 13:46:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.236 "name": "Existed_Raid", 00:23:17.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.236 "strip_size_kb": 64, 00:23:17.236 "state": "configuring", 00:23:17.236 "raid_level": 
"raid5f", 00:23:17.236 "superblock": false, 00:23:17.236 "num_base_bdevs": 3, 00:23:17.236 "num_base_bdevs_discovered": 1, 00:23:17.236 "num_base_bdevs_operational": 3, 00:23:17.236 "base_bdevs_list": [ 00:23:17.236 { 00:23:17.236 "name": "BaseBdev1", 00:23:17.236 "uuid": "2141f3a2-ce0c-4240-85f7-ae6b4c84c567", 00:23:17.236 "is_configured": true, 00:23:17.236 "data_offset": 0, 00:23:17.236 "data_size": 65536 00:23:17.236 }, 00:23:17.236 { 00:23:17.236 "name": "BaseBdev2", 00:23:17.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.236 "is_configured": false, 00:23:17.236 "data_offset": 0, 00:23:17.236 "data_size": 0 00:23:17.236 }, 00:23:17.236 { 00:23:17.236 "name": "BaseBdev3", 00:23:17.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.236 "is_configured": false, 00:23:17.236 "data_offset": 0, 00:23:17.236 "data_size": 0 00:23:17.236 } 00:23:17.236 ] 00:23:17.236 }' 00:23:17.236 13:46:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.236 13:46:56 -- common/autotest_common.sh@10 -- # set +x 00:23:18.170 13:46:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:18.170 [2024-07-10 13:46:57.428253] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:18.170 [2024-07-10 13:46:57.428337] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:18.170 13:46:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:18.170 13:46:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:18.428 [2024-07-10 13:46:57.636339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:18.428 [2024-07-10 13:46:57.638290] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:18.428 [2024-07-10 13:46:57.638372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:18.428 [2024-07-10 13:46:57.638381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:18.428 [2024-07-10 13:46:57.638406] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.428 13:46:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:18.685 13:46:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.685 "name": "Existed_Raid", 00:23:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.685 "strip_size_kb": 64, 00:23:18.685 "state": "configuring", 00:23:18.685 "raid_level": "raid5f", 00:23:18.685 "superblock": false, 00:23:18.685 "num_base_bdevs": 3, 00:23:18.685 "num_base_bdevs_discovered": 1, 00:23:18.685 "num_base_bdevs_operational": 3, 00:23:18.685 "base_bdevs_list": [ 00:23:18.685 { 00:23:18.685 "name": "BaseBdev1", 00:23:18.685 "uuid": "2141f3a2-ce0c-4240-85f7-ae6b4c84c567", 00:23:18.685 "is_configured": true, 00:23:18.685 "data_offset": 0, 00:23:18.685 "data_size": 65536 00:23:18.685 }, 00:23:18.685 { 00:23:18.685 "name": "BaseBdev2", 00:23:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.685 "is_configured": false, 00:23:18.685 "data_offset": 0, 00:23:18.685 "data_size": 0 00:23:18.685 }, 00:23:18.685 { 00:23:18.685 "name": "BaseBdev3", 00:23:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.685 "is_configured": false, 00:23:18.685 "data_offset": 0, 00:23:18.685 "data_size": 0 00:23:18.685 } 00:23:18.685 ] 00:23:18.685 }' 00:23:18.685 13:46:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.685 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:23:19.676 13:46:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:19.934 [2024-07-10 13:46:59.022347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.934 BaseBdev2 00:23:19.934 13:46:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:19.934 13:46:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:19.934 13:46:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:19.934 13:46:59 -- common/autotest_common.sh@889 -- # local i 00:23:19.934 13:46:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:19.934 13:46:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:19.934 13:46:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:19.934 13:46:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:20.501 [ 00:23:20.501 { 00:23:20.501 "name": "BaseBdev2", 00:23:20.501 "aliases": [ 00:23:20.501 "1709f0de-eb34-4a9f-b6e3-ec399fd4e970" 00:23:20.501 ], 00:23:20.501 "product_name": "Malloc disk", 00:23:20.501 "block_size": 512, 00:23:20.501 "num_blocks": 65536, 00:23:20.501 "uuid": "1709f0de-eb34-4a9f-b6e3-ec399fd4e970", 00:23:20.501 "assigned_rate_limits": { 00:23:20.501 "rw_ios_per_sec": 0, 00:23:20.501 "rw_mbytes_per_sec": 0, 00:23:20.501 "r_mbytes_per_sec": 0, 00:23:20.501 "w_mbytes_per_sec": 0 00:23:20.501 }, 00:23:20.501 "claimed": true, 00:23:20.501 "claim_type": "exclusive_write", 00:23:20.501 "zoned": false, 00:23:20.501 "supported_io_types": { 00:23:20.501 "read": true, 00:23:20.501 "write": true, 00:23:20.501 "unmap": true, 00:23:20.501 "write_zeroes": true, 00:23:20.501 "flush": true, 00:23:20.501 "reset": true, 00:23:20.501 "compare": false, 00:23:20.501 "compare_and_write": false, 00:23:20.501 "abort": true, 00:23:20.501 "nvme_admin": false, 00:23:20.501 "nvme_io": false 00:23:20.501 }, 00:23:20.501 "memory_domains": [ 00:23:20.501 { 00:23:20.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:20.501 "dma_device_type": 2 00:23:20.501 } 00:23:20.501 ], 00:23:20.501 "driver_specific": {} 00:23:20.501 } 00:23:20.501 ] 00:23:20.501 13:46:59 -- common/autotest_common.sh@895 -- # return 0 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.501 13:46:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.761 13:46:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.761 "name": "Existed_Raid", 00:23:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.761 "strip_size_kb": 64, 00:23:20.761 "state": "configuring", 00:23:20.761 "raid_level": "raid5f", 00:23:20.761 "superblock": false, 00:23:20.761 "num_base_bdevs": 3, 00:23:20.761 "num_base_bdevs_discovered": 2, 00:23:20.761 "num_base_bdevs_operational": 3, 00:23:20.761 "base_bdevs_list": [ 00:23:20.761 { 00:23:20.761 "name": "BaseBdev1", 00:23:20.761 "uuid": "2141f3a2-ce0c-4240-85f7-ae6b4c84c567", 00:23:20.761 "is_configured": true, 00:23:20.761 "data_offset": 0, 00:23:20.761 "data_size": 65536 00:23:20.761 }, 00:23:20.761 { 00:23:20.761 "name": "BaseBdev2", 00:23:20.761 "uuid": "1709f0de-eb34-4a9f-b6e3-ec399fd4e970", 00:23:20.761 "is_configured": true, 00:23:20.761 "data_offset": 0, 00:23:20.761 "data_size": 65536 00:23:20.761 }, 00:23:20.761 { 00:23:20.761 "name": "BaseBdev3", 00:23:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.761 "is_configured": false, 00:23:20.761 "data_offset": 0, 00:23:20.761 "data_size": 0 00:23:20.761 } 00:23:20.761 ] 00:23:20.761 }' 00:23:20.761 13:46:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.761 13:46:59 -- common/autotest_common.sh@10 -- # set +x 00:23:21.326 13:47:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:21.584 [2024-07-10 13:47:00.909822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.584 [2024-07-10 13:47:00.909904] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:21.584 [2024-07-10 13:47:00.909915] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:21.584 [2024-07-10 13:47:00.910047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:23:21.584 [2024-07-10 13:47:00.916846] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:21.584 [2024-07-10 13:47:00.916888] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:21.584 [2024-07-10 13:47:00.917203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.584 BaseBdev3 00:23:21.584 13:47:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:21.584 13:47:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:21.584 13:47:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:21.584 13:47:00 -- common/autotest_common.sh@889 -- # local i 00:23:21.584 13:47:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:21.584 13:47:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:21.584 13:47:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.842 13:47:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:22.100 [ 00:23:22.100 { 00:23:22.100 "name": "BaseBdev3", 00:23:22.100 "aliases": [ 00:23:22.100 "54999767-8caf-4a77-bc08-4aca68d50ee3" 00:23:22.100 ], 00:23:22.100 "product_name": "Malloc disk", 00:23:22.100 "block_size": 512, 00:23:22.100 "num_blocks": 65536, 00:23:22.100 "uuid": "54999767-8caf-4a77-bc08-4aca68d50ee3", 00:23:22.100 "assigned_rate_limits": { 00:23:22.100 "rw_ios_per_sec": 0, 00:23:22.100 "rw_mbytes_per_sec": 0, 00:23:22.100 "r_mbytes_per_sec": 0, 00:23:22.100 "w_mbytes_per_sec": 0 00:23:22.100 }, 00:23:22.100 "claimed": true, 00:23:22.100 "claim_type": "exclusive_write", 00:23:22.100 "zoned": false, 00:23:22.101 "supported_io_types": { 00:23:22.101 "read": true, 00:23:22.101 "write": true, 00:23:22.101 "unmap": true, 00:23:22.101 "write_zeroes": true, 00:23:22.101 "flush": true, 00:23:22.101 "reset": true, 00:23:22.101 "compare": false, 00:23:22.101 "compare_and_write": false, 00:23:22.101 "abort": true, 00:23:22.101 "nvme_admin": false, 00:23:22.101 "nvme_io": false 00:23:22.101 }, 00:23:22.101 "memory_domains": [ 00:23:22.101 { 00:23:22.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.101 "dma_device_type": 2 00:23:22.101 } 00:23:22.101 ], 00:23:22.101 "driver_specific": {} 00:23:22.101 } 00:23:22.101 ] 00:23:22.101 13:47:01 -- common/autotest_common.sh@895 -- # return 0 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.101 13:47:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.359 
13:47:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.359 "name": "Existed_Raid", 00:23:22.359 "uuid": "fa40500e-ee0e-4f8e-afa5-70b77f6795e7", 00:23:22.359 "strip_size_kb": 64, 00:23:22.359 "state": "online", 00:23:22.359 "raid_level": "raid5f", 00:23:22.359 "superblock": false, 00:23:22.359 "num_base_bdevs": 3, 00:23:22.359 "num_base_bdevs_discovered": 3, 00:23:22.359 "num_base_bdevs_operational": 3, 00:23:22.359 "base_bdevs_list": [ 00:23:22.359 { 00:23:22.359 "name": "BaseBdev1", 00:23:22.359 "uuid": "2141f3a2-ce0c-4240-85f7-ae6b4c84c567", 00:23:22.359 "is_configured": true, 00:23:22.359 "data_offset": 0, 00:23:22.359 "data_size": 65536 00:23:22.359 }, 00:23:22.359 { 00:23:22.359 "name": "BaseBdev2", 00:23:22.359 "uuid": "1709f0de-eb34-4a9f-b6e3-ec399fd4e970", 00:23:22.359 "is_configured": true, 00:23:22.359 "data_offset": 0, 00:23:22.359 "data_size": 65536 00:23:22.359 }, 00:23:22.359 { 00:23:22.359 "name": "BaseBdev3", 00:23:22.359 "uuid": "54999767-8caf-4a77-bc08-4aca68d50ee3", 00:23:22.359 "is_configured": true, 00:23:22.359 "data_offset": 0, 00:23:22.359 "data_size": 65536 00:23:22.359 } 00:23:22.359 ] 00:23:22.359 }' 00:23:22.359 13:47:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.359 13:47:01 -- common/autotest_common.sh@10 -- # set +x 00:23:22.994 13:47:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:23.254 [2024-07-10 13:47:02.526486] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.513 13:47:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.774 13:47:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.774 "name": "Existed_Raid", 00:23:23.774 "uuid": "fa40500e-ee0e-4f8e-afa5-70b77f6795e7", 00:23:23.774 "strip_size_kb": 64, 00:23:23.774 "state": "online", 00:23:23.774 "raid_level": "raid5f", 00:23:23.774 "superblock": false, 00:23:23.774 "num_base_bdevs": 3, 00:23:23.774 "num_base_bdevs_discovered": 2, 00:23:23.774 "num_base_bdevs_operational": 2, 00:23:23.774 "base_bdevs_list": [ 00:23:23.774 { 00:23:23.774 "name": null, 00:23:23.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.774 "is_configured": false, 00:23:23.774 
"data_offset": 0, 00:23:23.774 "data_size": 65536 00:23:23.774 }, 00:23:23.774 { 00:23:23.774 "name": "BaseBdev2", 00:23:23.774 "uuid": "1709f0de-eb34-4a9f-b6e3-ec399fd4e970", 00:23:23.774 "is_configured": true, 00:23:23.774 "data_offset": 0, 00:23:23.774 "data_size": 65536 00:23:23.774 }, 00:23:23.774 { 00:23:23.774 "name": "BaseBdev3", 00:23:23.774 "uuid": "54999767-8caf-4a77-bc08-4aca68d50ee3", 00:23:23.774 "is_configured": true, 00:23:23.774 "data_offset": 0, 00:23:23.774 "data_size": 65536 00:23:23.774 } 00:23:23.774 ] 00:23:23.774 }' 00:23:23.774 13:47:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.774 13:47:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.341 13:47:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:24.341 13:47:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.341 13:47:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.341 13:47:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:24.600 13:47:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:24.600 13:47:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.600 13:47:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:24.858 [2024-07-10 13:47:04.076327] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:24.858 [2024-07-10 13:47:04.076383] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.858 [2024-07-10 13:47:04.076441] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.858 13:47:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:24.858 13:47:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.858 13:47:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.858 13:47:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.117 13:47:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.117 13:47:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.117 13:47:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:25.375 [2024-07-10 13:47:04.669630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:25.375 [2024-07-10 13:47:04.669706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:25.633 13:47:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.633 13:47:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.634 13:47:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.634 13:47:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:25.893 13:47:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:25.893 13:47:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:25.893 13:47:05 -- bdev/bdev_raid.sh@287 -- # killprocess 130494 00:23:25.893 13:47:05 -- common/autotest_common.sh@926 -- # '[' -z 130494 ']' 00:23:25.893 13:47:05 -- common/autotest_common.sh@930 -- # kill -0 130494 00:23:25.893 13:47:05 -- common/autotest_common.sh@931 -- # uname 00:23:25.893 13:47:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:25.893 13:47:05 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130494 00:23:25.893 killing process with pid 130494 00:23:25.893 13:47:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:25.893 13:47:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:25.893 13:47:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130494' 00:23:25.893 13:47:05 -- common/autotest_common.sh@945 -- # kill 130494 00:23:25.893 13:47:05 -- common/autotest_common.sh@950 -- # wait 130494 00:23:25.893 [2024-07-10 13:47:05.041318] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.893 [2024-07-10 13:47:05.041459] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:27.271 00:23:27.271 real 0m13.816s 00:23:27.271 user 0m24.285s 00:23:27.271 sys 0m1.425s 00:23:27.271 13:47:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.271 13:47:06 -- common/autotest_common.sh@10 -- # set +x 00:23:27.271 ************************************ 00:23:27.271 END TEST raid5f_state_function_test 00:23:27.271 ************************************ 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:27.271 13:47:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:27.271 13:47:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.271 13:47:06 -- common/autotest_common.sh@10 -- # set +x 00:23:27.271 ************************************ 00:23:27.271 START TEST raid5f_state_function_test_sb 00:23:27.271 ************************************ 00:23:27.271 13:47:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=130913 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130913' 00:23:27.271 Process raid pid: 130913 00:23:27.271 13:47:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130913 /var/tmp/spdk-raid.sock 00:23:27.271 13:47:06 -- common/autotest_common.sh@819 -- # '[' -z 130913 ']' 00:23:27.271 13:47:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:27.271 13:47:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:27.271 13:47:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:27.271 13:47:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:27.271 13:47:06 -- common/autotest_common.sh@10 -- # set +x 00:23:27.530 [2024-07-10 13:47:06.640532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:27.530 [2024-07-10 13:47:06.640715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.530 [2024-07-10 13:47:06.793038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.789 [2024-07-10 13:47:07.023164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.048 [2024-07-10 13:47:07.262987] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.306 13:47:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:28.306 13:47:07 -- common/autotest_common.sh@852 -- # return 0 00:23:28.306 13:47:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:28.566 [2024-07-10 13:47:07.764761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:28.566 [2024-07-10 13:47:07.764843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:28.566 [2024-07-10 13:47:07.764855] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:28.566 [2024-07-10 13:47:07.764872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:28.566 [2024-07-10 13:47:07.764878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:28.566 [2024-07-10 13:47:07.764914] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.566 13:47:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.824 13:47:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.824 "name": "Existed_Raid", 00:23:28.824 "uuid": "808f68d2-5c25-4c65-969f-fb1a634fbcbe", 00:23:28.824 "strip_size_kb": 64, 00:23:28.824 "state": "configuring", 00:23:28.824 "raid_level": "raid5f", 00:23:28.824 "superblock": true, 00:23:28.824 "num_base_bdevs": 3, 00:23:28.824 "num_base_bdevs_discovered": 0, 00:23:28.824 "num_base_bdevs_operational": 3, 00:23:28.824 "base_bdevs_list": [ 00:23:28.824 { 00:23:28.824 "name": "BaseBdev1", 00:23:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.824 "is_configured": false, 00:23:28.824 "data_offset": 0, 00:23:28.824 "data_size": 0 00:23:28.824 }, 00:23:28.824 { 00:23:28.824 "name": "BaseBdev2", 00:23:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.824 "is_configured": false, 00:23:28.824 "data_offset": 0, 00:23:28.824 "data_size": 0 00:23:28.824 }, 00:23:28.824 { 00:23:28.824 "name": "BaseBdev3", 00:23:28.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.824 "is_configured": false, 00:23:28.824 "data_offset": 0, 00:23:28.824 "data_size": 0 00:23:28.824 } 00:23:28.825 ] 00:23:28.825 }' 00:23:28.825 13:47:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.825 13:47:08 -- common/autotest_common.sh@10 -- # set +x 00:23:29.391 13:47:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:29.650 [2024-07-10 13:47:08.906749] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:29.650 [2024-07-10 13:47:08.906804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:29.650 13:47:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:29.913 [2024-07-10 13:47:09.182357] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:29.913 [2024-07-10 13:47:09.182453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:29.913 [2024-07-10 13:47:09.182464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:29.913 [2024-07-10 13:47:09.182479] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:29.913 [2024-07-10 13:47:09.182485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:29.913 [2024-07-10 13:47:09.182514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:29.913 13:47:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:30.176 [2024-07-10 
13:47:09.507839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:30.176 BaseBdev1 00:23:30.176 13:47:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:30.176 13:47:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:30.176 13:47:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:30.176 13:47:09 -- common/autotest_common.sh@889 -- # local i 00:23:30.176 13:47:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:30.176 13:47:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:30.176 13:47:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.435 13:47:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:31.001 [ 00:23:31.001 { 00:23:31.001 "name": "BaseBdev1", 00:23:31.001 "aliases": [ 00:23:31.001 "fed38d66-8498-48d5-98da-76b2717f7289" 00:23:31.001 ], 00:23:31.001 "product_name": "Malloc disk", 00:23:31.001 "block_size": 512, 00:23:31.001 "num_blocks": 65536, 00:23:31.001 "uuid": "fed38d66-8498-48d5-98da-76b2717f7289", 00:23:31.001 "assigned_rate_limits": { 00:23:31.001 "rw_ios_per_sec": 0, 00:23:31.001 "rw_mbytes_per_sec": 0, 00:23:31.001 "r_mbytes_per_sec": 0, 00:23:31.001 "w_mbytes_per_sec": 0 00:23:31.001 }, 00:23:31.001 "claimed": true, 00:23:31.001 "claim_type": "exclusive_write", 00:23:31.001 "zoned": false, 00:23:31.001 "supported_io_types": { 00:23:31.001 "read": true, 00:23:31.001 "write": true, 00:23:31.001 "unmap": true, 00:23:31.001 "write_zeroes": true, 00:23:31.001 "flush": true, 00:23:31.001 "reset": true, 00:23:31.001 "compare": false, 00:23:31.001 "compare_and_write": false, 00:23:31.001 "abort": true, 00:23:31.001 "nvme_admin": false, 00:23:31.001 "nvme_io": false 00:23:31.001 }, 00:23:31.001 "memory_domains": [ 00:23:31.001 { 00:23:31.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.001 "dma_device_type": 2 00:23:31.001 } 00:23:31.001 ], 00:23:31.001 "driver_specific": {} 00:23:31.001 } 00:23:31.001 ] 00:23:31.001 13:47:10 -- common/autotest_common.sh@895 -- # return 0 00:23:31.001 13:47:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:31.001 13:47:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.002 13:47:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.260 13:47:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.260 "name": "Existed_Raid", 00:23:31.260 "uuid": "57242be7-3c69-48a7-b8aa-cd573f657e0f", 00:23:31.260 "strip_size_kb": 64, 00:23:31.260 "state": "configuring", 00:23:31.260 "raid_level": 
"raid5f", 00:23:31.260 "superblock": true, 00:23:31.260 "num_base_bdevs": 3, 00:23:31.260 "num_base_bdevs_discovered": 1, 00:23:31.260 "num_base_bdevs_operational": 3, 00:23:31.260 "base_bdevs_list": [ 00:23:31.260 { 00:23:31.260 "name": "BaseBdev1", 00:23:31.260 "uuid": "fed38d66-8498-48d5-98da-76b2717f7289", 00:23:31.260 "is_configured": true, 00:23:31.260 "data_offset": 2048, 00:23:31.260 "data_size": 63488 00:23:31.260 }, 00:23:31.260 { 00:23:31.260 "name": "BaseBdev2", 00:23:31.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.260 "is_configured": false, 00:23:31.260 "data_offset": 0, 00:23:31.260 "data_size": 0 00:23:31.260 }, 00:23:31.260 { 00:23:31.260 "name": "BaseBdev3", 00:23:31.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.260 "is_configured": false, 00:23:31.260 "data_offset": 0, 00:23:31.260 "data_size": 0 00:23:31.260 } 00:23:31.260 ] 00:23:31.260 }' 00:23:31.260 13:47:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.260 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:23:31.826 13:47:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:32.084 [2024-07-10 13:47:11.335508] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:32.084 [2024-07-10 13:47:11.335599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:32.084 13:47:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:32.084 13:47:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:32.650 13:47:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:32.947 BaseBdev1 00:23:32.947 13:47:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:32.947 13:47:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:32.947 13:47:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:32.947 13:47:12 -- common/autotest_common.sh@889 -- # local i 00:23:32.947 13:47:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:32.947 13:47:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:32.947 13:47:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.947 13:47:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:33.205 [ 00:23:33.205 { 00:23:33.205 "name": "BaseBdev1", 00:23:33.205 "aliases": [ 00:23:33.205 "1d75d349-3434-48ea-88b3-f0f5c999f729" 00:23:33.205 ], 00:23:33.205 "product_name": "Malloc disk", 00:23:33.205 "block_size": 512, 00:23:33.205 "num_blocks": 65536, 00:23:33.205 "uuid": "1d75d349-3434-48ea-88b3-f0f5c999f729", 00:23:33.205 "assigned_rate_limits": { 00:23:33.205 "rw_ios_per_sec": 0, 00:23:33.205 "rw_mbytes_per_sec": 0, 00:23:33.205 "r_mbytes_per_sec": 0, 00:23:33.205 "w_mbytes_per_sec": 0 00:23:33.205 }, 00:23:33.205 "claimed": false, 00:23:33.205 "zoned": false, 00:23:33.205 "supported_io_types": { 00:23:33.205 "read": true, 00:23:33.205 "write": true, 00:23:33.205 "unmap": true, 00:23:33.205 "write_zeroes": true, 00:23:33.205 "flush": true, 00:23:33.205 "reset": true, 00:23:33.205 "compare": false, 00:23:33.205 "compare_and_write": false, 00:23:33.205 "abort": true, 
00:23:33.205 "nvme_admin": false, 00:23:33.205 "nvme_io": false 00:23:33.205 }, 00:23:33.205 "memory_domains": [ 00:23:33.205 { 00:23:33.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.205 "dma_device_type": 2 00:23:33.205 } 00:23:33.205 ], 00:23:33.205 "driver_specific": {} 00:23:33.205 } 00:23:33.205 ] 00:23:33.205 13:47:12 -- common/autotest_common.sh@895 -- # return 0 00:23:33.205 13:47:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:33.464 [2024-07-10 13:47:12.804469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:33.464 [2024-07-10 13:47:12.806383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:33.464 [2024-07-10 13:47:12.806468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:33.464 [2024-07-10 13:47:12.806477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:33.464 [2024-07-10 13:47:12.806500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.722 13:47:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.722 13:47:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.722 "name": "Existed_Raid", 00:23:33.722 "uuid": "462ab899-3002-4e9b-a0d7-140c57559de1", 00:23:33.722 "strip_size_kb": 64, 00:23:33.722 "state": "configuring", 00:23:33.722 "raid_level": "raid5f", 00:23:33.722 "superblock": true, 00:23:33.722 "num_base_bdevs": 3, 00:23:33.722 "num_base_bdevs_discovered": 1, 00:23:33.722 "num_base_bdevs_operational": 3, 00:23:33.722 "base_bdevs_list": [ 00:23:33.722 { 00:23:33.722 "name": "BaseBdev1", 00:23:33.722 "uuid": "1d75d349-3434-48ea-88b3-f0f5c999f729", 00:23:33.722 "is_configured": true, 00:23:33.722 "data_offset": 2048, 00:23:33.722 "data_size": 63488 00:23:33.722 }, 00:23:33.722 { 00:23:33.722 "name": "BaseBdev2", 00:23:33.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.722 "is_configured": false, 00:23:33.722 "data_offset": 0, 00:23:33.722 "data_size": 0 00:23:33.723 }, 00:23:33.723 { 00:23:33.723 "name": "BaseBdev3", 00:23:33.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.723 "is_configured": false, 00:23:33.723 "data_offset": 0, 00:23:33.723 
"data_size": 0 00:23:33.723 } 00:23:33.723 ] 00:23:33.723 }' 00:23:33.723 13:47:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.723 13:47:13 -- common/autotest_common.sh@10 -- # set +x 00:23:34.657 13:47:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.657 [2024-07-10 13:47:13.974994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.657 BaseBdev2 00:23:34.657 13:47:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:34.657 13:47:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:34.657 13:47:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:34.657 13:47:13 -- common/autotest_common.sh@889 -- # local i 00:23:34.657 13:47:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:34.657 13:47:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:34.657 13:47:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.915 13:47:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:35.173 [ 00:23:35.173 { 00:23:35.173 "name": "BaseBdev2", 00:23:35.173 "aliases": [ 00:23:35.173 "cdf60d0f-56fd-4604-8a2b-2580afe3d0cf" 00:23:35.173 ], 00:23:35.173 "product_name": "Malloc disk", 00:23:35.173 "block_size": 512, 00:23:35.173 "num_blocks": 65536, 00:23:35.173 "uuid": "cdf60d0f-56fd-4604-8a2b-2580afe3d0cf", 00:23:35.173 "assigned_rate_limits": { 00:23:35.173 "rw_ios_per_sec": 0, 00:23:35.173 "rw_mbytes_per_sec": 0, 00:23:35.173 "r_mbytes_per_sec": 0, 00:23:35.173 "w_mbytes_per_sec": 0 00:23:35.173 }, 00:23:35.173 "claimed": true, 00:23:35.173 "claim_type": "exclusive_write", 00:23:35.173 "zoned": false, 00:23:35.173 "supported_io_types": { 00:23:35.173 "read": true, 00:23:35.173 "write": true, 00:23:35.173 "unmap": true, 00:23:35.173 "write_zeroes": true, 00:23:35.173 "flush": true, 00:23:35.173 "reset": true, 00:23:35.173 "compare": false, 00:23:35.173 "compare_and_write": false, 00:23:35.173 "abort": true, 00:23:35.173 "nvme_admin": false, 00:23:35.173 "nvme_io": false 00:23:35.173 }, 00:23:35.173 "memory_domains": [ 00:23:35.173 { 00:23:35.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.173 "dma_device_type": 2 00:23:35.173 } 00:23:35.173 ], 00:23:35.173 "driver_specific": {} 00:23:35.173 } 00:23:35.173 ] 00:23:35.173 13:47:14 -- common/autotest_common.sh@895 -- # return 0 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.173 13:47:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.431 13:47:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.431 "name": "Existed_Raid", 00:23:35.431 "uuid": "462ab899-3002-4e9b-a0d7-140c57559de1", 00:23:35.431 "strip_size_kb": 64, 00:23:35.431 "state": "configuring", 00:23:35.431 "raid_level": "raid5f", 00:23:35.431 "superblock": true, 00:23:35.431 "num_base_bdevs": 3, 00:23:35.431 "num_base_bdevs_discovered": 2, 00:23:35.431 "num_base_bdevs_operational": 3, 00:23:35.431 "base_bdevs_list": [ 00:23:35.431 { 00:23:35.431 "name": "BaseBdev1", 00:23:35.431 "uuid": "1d75d349-3434-48ea-88b3-f0f5c999f729", 00:23:35.431 "is_configured": true, 00:23:35.431 "data_offset": 2048, 00:23:35.431 "data_size": 63488 00:23:35.431 }, 00:23:35.431 { 00:23:35.431 "name": "BaseBdev2", 00:23:35.431 "uuid": "cdf60d0f-56fd-4604-8a2b-2580afe3d0cf", 00:23:35.431 "is_configured": true, 00:23:35.431 "data_offset": 2048, 00:23:35.431 "data_size": 63488 00:23:35.431 }, 00:23:35.431 { 00:23:35.431 "name": "BaseBdev3", 00:23:35.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.431 "is_configured": false, 00:23:35.431 "data_offset": 0, 00:23:35.431 "data_size": 0 00:23:35.431 } 00:23:35.431 ] 00:23:35.431 }' 00:23:35.431 13:47:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.431 13:47:14 -- common/autotest_common.sh@10 -- # set +x 00:23:36.365 13:47:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:36.365 [2024-07-10 13:47:15.694429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.365 [2024-07-10 13:47:15.694668] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:36.365 [2024-07-10 13:47:15.694687] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:36.365 [2024-07-10 13:47:15.694917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:36.365 BaseBdev3 00:23:36.365 [2024-07-10 13:47:15.702150] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:36.365 [2024-07-10 13:47:15.702196] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:36.365 [2024-07-10 13:47:15.702472] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.365 13:47:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:36.365 13:47:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:36.365 13:47:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:36.365 13:47:15 -- common/autotest_common.sh@889 -- # local i 00:23:36.365 13:47:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:36.365 13:47:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:36.365 13:47:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.623 13:47:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:36.955 [ 00:23:36.955 { 00:23:36.955 "name": "BaseBdev3", 00:23:36.955 "aliases": [ 00:23:36.955 
"8d95fb1a-3a23-4e41-8dc5-14717b3933ff" 00:23:36.955 ], 00:23:36.955 "product_name": "Malloc disk", 00:23:36.955 "block_size": 512, 00:23:36.955 "num_blocks": 65536, 00:23:36.955 "uuid": "8d95fb1a-3a23-4e41-8dc5-14717b3933ff", 00:23:36.955 "assigned_rate_limits": { 00:23:36.955 "rw_ios_per_sec": 0, 00:23:36.955 "rw_mbytes_per_sec": 0, 00:23:36.955 "r_mbytes_per_sec": 0, 00:23:36.955 "w_mbytes_per_sec": 0 00:23:36.955 }, 00:23:36.955 "claimed": true, 00:23:36.955 "claim_type": "exclusive_write", 00:23:36.955 "zoned": false, 00:23:36.955 "supported_io_types": { 00:23:36.955 "read": true, 00:23:36.955 "write": true, 00:23:36.955 "unmap": true, 00:23:36.955 "write_zeroes": true, 00:23:36.955 "flush": true, 00:23:36.955 "reset": true, 00:23:36.955 "compare": false, 00:23:36.955 "compare_and_write": false, 00:23:36.955 "abort": true, 00:23:36.955 "nvme_admin": false, 00:23:36.955 "nvme_io": false 00:23:36.955 }, 00:23:36.955 "memory_domains": [ 00:23:36.955 { 00:23:36.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.955 "dma_device_type": 2 00:23:36.955 } 00:23:36.955 ], 00:23:36.955 "driver_specific": {} 00:23:36.955 } 00:23:36.955 ] 00:23:36.955 13:47:16 -- common/autotest_common.sh@895 -- # return 0 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.955 13:47:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.235 13:47:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.235 "name": "Existed_Raid", 00:23:37.235 "uuid": "462ab899-3002-4e9b-a0d7-140c57559de1", 00:23:37.235 "strip_size_kb": 64, 00:23:37.235 "state": "online", 00:23:37.235 "raid_level": "raid5f", 00:23:37.235 "superblock": true, 00:23:37.235 "num_base_bdevs": 3, 00:23:37.235 "num_base_bdevs_discovered": 3, 00:23:37.235 "num_base_bdevs_operational": 3, 00:23:37.235 "base_bdevs_list": [ 00:23:37.235 { 00:23:37.235 "name": "BaseBdev1", 00:23:37.235 "uuid": "1d75d349-3434-48ea-88b3-f0f5c999f729", 00:23:37.235 "is_configured": true, 00:23:37.236 "data_offset": 2048, 00:23:37.236 "data_size": 63488 00:23:37.236 }, 00:23:37.236 { 00:23:37.236 "name": "BaseBdev2", 00:23:37.236 "uuid": "cdf60d0f-56fd-4604-8a2b-2580afe3d0cf", 00:23:37.236 "is_configured": true, 00:23:37.236 "data_offset": 2048, 00:23:37.236 "data_size": 63488 00:23:37.236 }, 00:23:37.236 { 00:23:37.236 "name": "BaseBdev3", 00:23:37.236 "uuid": "8d95fb1a-3a23-4e41-8dc5-14717b3933ff", 00:23:37.236 "is_configured": true, 00:23:37.236 "data_offset": 2048, 00:23:37.236 "data_size": 63488 00:23:37.236 } 
00:23:37.236 ] 00:23:37.236 }' 00:23:37.236 13:47:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.236 13:47:16 -- common/autotest_common.sh@10 -- # set +x 00:23:38.174 13:47:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:38.174 [2024-07-10 13:47:17.420355] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.433 13:47:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.692 13:47:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.692 "name": "Existed_Raid", 00:23:38.692 "uuid": "462ab899-3002-4e9b-a0d7-140c57559de1", 00:23:38.692 "strip_size_kb": 64, 00:23:38.692 "state": "online", 00:23:38.692 "raid_level": "raid5f", 00:23:38.692 "superblock": true, 00:23:38.692 "num_base_bdevs": 3, 00:23:38.692 "num_base_bdevs_discovered": 2, 00:23:38.692 "num_base_bdevs_operational": 2, 00:23:38.692 "base_bdevs_list": [ 00:23:38.692 { 00:23:38.692 "name": null, 00:23:38.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.692 "is_configured": false, 00:23:38.692 "data_offset": 2048, 00:23:38.692 "data_size": 63488 00:23:38.692 }, 00:23:38.692 { 00:23:38.692 "name": "BaseBdev2", 00:23:38.692 "uuid": "cdf60d0f-56fd-4604-8a2b-2580afe3d0cf", 00:23:38.692 "is_configured": true, 00:23:38.692 "data_offset": 2048, 00:23:38.692 "data_size": 63488 00:23:38.692 }, 00:23:38.692 { 00:23:38.692 "name": "BaseBdev3", 00:23:38.692 "uuid": "8d95fb1a-3a23-4e41-8dc5-14717b3933ff", 00:23:38.692 "is_configured": true, 00:23:38.692 "data_offset": 2048, 00:23:38.692 "data_size": 63488 00:23:38.692 } 00:23:38.692 ] 00:23:38.692 }' 00:23:38.692 13:47:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.692 13:47:17 -- common/autotest_common.sh@10 -- # set +x 00:23:39.260 13:47:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:39.260 13:47:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:39.260 13:47:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.260 13:47:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:39.519 13:47:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 
00:23:39.519 13:47:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:39.519 13:47:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:39.778 [2024-07-10 13:47:18.957859] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:39.778 [2024-07-10 13:47:18.957913] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.778 [2024-07-10 13:47:18.957969] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.778 13:47:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:39.778 13:47:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:39.778 13:47:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.778 13:47:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:40.043 13:47:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:40.043 13:47:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:40.043 13:47:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:40.303 [2024-07-10 13:47:19.590253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:40.303 [2024-07-10 13:47:19.590346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:40.563 13:47:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:40.563 13:47:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:40.563 13:47:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:40.563 13:47:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.822 13:47:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:40.822 13:47:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:40.822 13:47:19 -- bdev/bdev_raid.sh@287 -- # killprocess 130913 00:23:40.822 13:47:19 -- common/autotest_common.sh@926 -- # '[' -z 130913 ']' 00:23:40.822 13:47:19 -- common/autotest_common.sh@930 -- # kill -0 130913 00:23:40.822 13:47:19 -- common/autotest_common.sh@931 -- # uname 00:23:40.822 13:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:40.822 13:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130913 00:23:40.822 killing process with pid 130913 00:23:40.822 13:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:40.822 13:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:40.822 13:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130913' 00:23:40.822 13:47:19 -- common/autotest_common.sh@945 -- # kill 130913 00:23:40.822 13:47:19 -- common/autotest_common.sh@950 -- # wait 130913 00:23:40.822 [2024-07-10 13:47:19.988931] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.822 [2024-07-10 13:47:19.989097] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:42.199 ************************************ 00:23:42.199 END TEST raid5f_state_function_test_sb 00:23:42.199 ************************************ 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:42.199 00:23:42.199 real 0m14.909s 00:23:42.199 user 0m26.256s 00:23:42.199 sys 0m1.442s 00:23:42.199 13:47:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:23:42.199 13:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:42.199 13:47:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:42.199 13:47:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:42.199 13:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:42.199 ************************************ 00:23:42.199 START TEST raid5f_superblock_test 00:23:42.199 ************************************ 00:23:42.199 13:47:21 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@357 -- # raid_pid=131330 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131330 /var/tmp/spdk-raid.sock 00:23:42.199 13:47:21 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:42.199 13:47:21 -- common/autotest_common.sh@819 -- # '[' -z 131330 ']' 00:23:42.199 13:47:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:42.199 13:47:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:42.199 13:47:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:42.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:42.199 13:47:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:42.199 13:47:21 -- common/autotest_common.sh@10 -- # set +x 00:23:42.457 [2024-07-10 13:47:21.598104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:42.458 [2024-07-10 13:47:21.598443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131330 ] 00:23:42.458 [2024-07-10 13:47:21.777596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.717 [2024-07-10 13:47:22.014333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.977 [2024-07-10 13:47:22.247677] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.236 13:47:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:43.236 13:47:22 -- common/autotest_common.sh@852 -- # return 0 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:43.236 13:47:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:43.495 malloc1 00:23:43.495 13:47:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:43.759 [2024-07-10 13:47:23.041663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:43.759 [2024-07-10 13:47:23.041781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.759 [2024-07-10 13:47:23.041820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:43.759 [2024-07-10 13:47:23.041873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.759 [2024-07-10 13:47:23.044205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.759 [2024-07-10 13:47:23.044267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:43.759 pt1 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:43.759 13:47:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:44.018 malloc2 00:23:44.018 13:47:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:44.277 [2024-07-10 13:47:23.548704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:44.277 [2024-07-10 13:47:23.548811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.277 [2024-07-10 13:47:23.548853] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:44.277 [2024-07-10 13:47:23.548909] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.277 [2024-07-10 13:47:23.551435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.277 [2024-07-10 13:47:23.551529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:44.277 pt2 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:44.277 13:47:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:44.536 malloc3 00:23:44.795 13:47:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:44.795 [2024-07-10 13:47:24.127502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:44.795 [2024-07-10 13:47:24.127602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.795 [2024-07-10 13:47:24.127642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:44.795 [2024-07-10 13:47:24.127680] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.795 [2024-07-10 13:47:24.129951] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.795 [2024-07-10 13:47:24.130017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:44.795 pt3 00:23:44.795 13:47:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:44.795 13:47:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:44.795 13:47:24 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:45.053 [2024-07-10 13:47:24.339230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:45.053 [2024-07-10 13:47:24.341175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:45.053 [2024-07-10 13:47:24.341253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:45.053 [2024-07-10 13:47:24.341447] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:45.053 [2024-07-10 13:47:24.341464] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:45.053 [2024-07-10 13:47:24.341616] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:45.053 [2024-07-10 13:47:24.348172] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:45.053 [2024-07-10 13:47:24.348206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:45.053 [2024-07-10 13:47:24.348448] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.053 13:47:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.312 13:47:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.312 "name": "raid_bdev1", 00:23:45.312 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:45.312 "strip_size_kb": 64, 00:23:45.312 "state": "online", 00:23:45.312 "raid_level": "raid5f", 00:23:45.312 "superblock": true, 00:23:45.312 "num_base_bdevs": 3, 00:23:45.312 "num_base_bdevs_discovered": 3, 00:23:45.312 "num_base_bdevs_operational": 3, 00:23:45.312 "base_bdevs_list": [ 00:23:45.312 { 00:23:45.312 "name": "pt1", 00:23:45.312 "uuid": "1de0962e-451e-5ec8-a0b2-5c102df7a68c", 00:23:45.312 "is_configured": true, 00:23:45.312 "data_offset": 2048, 00:23:45.312 "data_size": 63488 00:23:45.312 }, 00:23:45.312 { 00:23:45.312 "name": "pt2", 00:23:45.312 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:45.312 "is_configured": true, 00:23:45.312 "data_offset": 2048, 00:23:45.312 "data_size": 63488 00:23:45.312 }, 00:23:45.312 { 00:23:45.312 "name": "pt3", 00:23:45.312 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:45.312 "is_configured": true, 00:23:45.312 "data_offset": 2048, 00:23:45.312 "data_size": 63488 00:23:45.312 } 00:23:45.312 ] 00:23:45.312 }' 00:23:45.312 13:47:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.312 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:23:46.244 13:47:25 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:46.244 13:47:25 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:46.244 [2024-07-10 13:47:25.498805] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.244 13:47:25 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f9cb1daa-8a8f-4f9e-9811-8c0973d8841a 00:23:46.244 13:47:25 -- bdev/bdev_raid.sh@380 -- # '[' -z f9cb1daa-8a8f-4f9e-9811-8c0973d8841a ']' 00:23:46.245 13:47:25 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:46.501 [2024-07-10 13:47:25.722208] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.501 [2024-07-10 13:47:25.722253] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.501 [2024-07-10 13:47:25.722349] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.501 [2024-07-10 13:47:25.722427] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.501 [2024-07-10 13:47:25.722436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:46.501 13:47:25 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.501 13:47:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:46.758 13:47:25 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:46.758 13:47:25 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:46.758 13:47:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:46.758 13:47:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:47.016 13:47:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:47.016 13:47:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:47.275 13:47:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:47.275 13:47:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:47.532 13:47:26 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:47.532 13:47:26 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:47.532 13:47:26 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:47.533 13:47:26 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:47.533 13:47:26 -- common/autotest_common.sh@640 -- # local es=0 00:23:47.533 13:47:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:47.533 13:47:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:47.533 13:47:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:47.533 13:47:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:47.533 13:47:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:47.533 13:47:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:47.533 13:47:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:47.533 13:47:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:47.533 13:47:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:47.533 13:47:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:47.792 [2024-07-10 13:47:27.091901] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:47.792 [2024-07-10 13:47:27.093854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:47.792 [2024-07-10 13:47:27.093924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:47.792 [2024-07-10 13:47:27.093979] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:47.792 [2024-07-10 13:47:27.094060] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:47.792 [2024-07-10 13:47:27.094092] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:47.792 [2024-07-10 13:47:27.094135] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:47.792 [2024-07-10 13:47:27.094145] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:47.792 request: 00:23:47.792 { 00:23:47.792 "name": "raid_bdev1", 00:23:47.792 "raid_level": "raid5f", 00:23:47.792 "base_bdevs": [ 00:23:47.792 "malloc1", 00:23:47.792 "malloc2", 00:23:47.792 "malloc3" 00:23:47.792 ], 00:23:47.792 "superblock": false, 00:23:47.792 "strip_size_kb": 64, 00:23:47.792 "method": "bdev_raid_create", 00:23:47.792 "req_id": 1 00:23:47.792 } 00:23:47.792 Got JSON-RPC error response 00:23:47.792 response: 00:23:47.792 { 00:23:47.792 "code": -17, 00:23:47.792 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:47.792 } 00:23:47.792 13:47:27 -- common/autotest_common.sh@643 -- # es=1 00:23:47.792 13:47:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:47.792 13:47:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:47.792 13:47:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:47.792 13:47:27 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:47.792 13:47:27 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.051 13:47:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:48.051 13:47:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:48.051 13:47:27 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:48.311 [2024-07-10 13:47:27.580124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:48.311 [2024-07-10 13:47:27.580222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.311 [2024-07-10 13:47:27.580260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:48.311 [2024-07-10 13:47:27.580278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.311 [2024-07-10 13:47:27.582494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.311 [2024-07-10 13:47:27.582548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:48.311 [2024-07-10 13:47:27.582702] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:48.311 [2024-07-10 13:47:27.582759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:48.311 pt1 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.311 13:47:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.571 13:47:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.571 "name": "raid_bdev1", 00:23:48.571 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:48.571 "strip_size_kb": 64, 00:23:48.571 "state": "configuring", 00:23:48.571 "raid_level": "raid5f", 00:23:48.571 "superblock": true, 00:23:48.571 "num_base_bdevs": 3, 00:23:48.571 "num_base_bdevs_discovered": 1, 00:23:48.571 "num_base_bdevs_operational": 3, 00:23:48.571 "base_bdevs_list": [ 00:23:48.571 { 00:23:48.571 "name": "pt1", 00:23:48.571 "uuid": "1de0962e-451e-5ec8-a0b2-5c102df7a68c", 00:23:48.571 "is_configured": true, 00:23:48.571 "data_offset": 2048, 00:23:48.571 "data_size": 63488 00:23:48.571 }, 00:23:48.571 { 00:23:48.571 "name": null, 00:23:48.571 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:48.571 "is_configured": false, 00:23:48.571 "data_offset": 2048, 00:23:48.571 "data_size": 63488 00:23:48.571 }, 00:23:48.571 { 00:23:48.571 "name": null, 00:23:48.571 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:48.571 "is_configured": false, 00:23:48.571 "data_offset": 2048, 00:23:48.571 "data_size": 63488 00:23:48.571 } 00:23:48.571 ] 00:23:48.571 }' 00:23:48.571 13:47:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.571 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:23:49.509 13:47:28 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:49.509 13:47:28 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:49.509 [2024-07-10 13:47:28.726160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:49.509 [2024-07-10 13:47:28.726277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.509 [2024-07-10 13:47:28.726329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:49.509 [2024-07-10 13:47:28.726359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.509 [2024-07-10 13:47:28.726839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.509 [2024-07-10 13:47:28.726876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:49.509 [2024-07-10 13:47:28.727022] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:49.509 [2024-07-10 13:47:28.727054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:49.509 pt2 00:23:49.509 13:47:28 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:49.768 [2024-07-10 13:47:28.961810] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.768 13:47:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.026 13:47:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.026 "name": "raid_bdev1", 00:23:50.026 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:50.026 "strip_size_kb": 64, 00:23:50.026 "state": "configuring", 00:23:50.026 "raid_level": "raid5f", 00:23:50.026 "superblock": true, 00:23:50.026 "num_base_bdevs": 3, 00:23:50.026 "num_base_bdevs_discovered": 1, 00:23:50.026 "num_base_bdevs_operational": 3, 00:23:50.026 "base_bdevs_list": [ 00:23:50.026 { 00:23:50.026 "name": "pt1", 00:23:50.026 "uuid": "1de0962e-451e-5ec8-a0b2-5c102df7a68c", 00:23:50.026 "is_configured": true, 00:23:50.026 "data_offset": 2048, 00:23:50.026 "data_size": 63488 00:23:50.026 }, 00:23:50.026 { 00:23:50.026 "name": null, 00:23:50.026 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:50.026 "is_configured": false, 00:23:50.026 "data_offset": 2048, 00:23:50.026 "data_size": 63488 00:23:50.026 }, 00:23:50.026 { 00:23:50.026 "name": null, 00:23:50.026 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:50.026 "is_configured": false, 00:23:50.026 "data_offset": 2048, 00:23:50.026 "data_size": 63488 00:23:50.026 } 00:23:50.026 ] 00:23:50.026 }' 00:23:50.026 13:47:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.026 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:23:50.594 13:47:29 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:50.594 13:47:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:50.594 13:47:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:50.853 [2024-07-10 13:47:30.099887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.853 [2024-07-10 13:47:30.099995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.853 [2024-07-10 13:47:30.100033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:50.853 [2024-07-10 13:47:30.100058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.853 [2024-07-10 13:47:30.100568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.853 [2024-07-10 13:47:30.100612] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.853 [2024-07-10 13:47:30.100761] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:50.853 [2024-07-10 13:47:30.100792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:50.853 pt2 00:23:50.853 13:47:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:50.853 13:47:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:50.853 13:47:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:51.111 [2024-07-10 13:47:30.339498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:51.111 [2024-07-10 13:47:30.339605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.111 [2024-07-10 13:47:30.339641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:51.111 [2024-07-10 13:47:30.339666] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.111 [2024-07-10 13:47:30.340163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.111 [2024-07-10 13:47:30.340208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:51.111 [2024-07-10 13:47:30.340359] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:51.111 [2024-07-10 13:47:30.340392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.111 [2024-07-10 13:47:30.340534] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:51.111 [2024-07-10 13:47:30.340552] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:51.111 [2024-07-10 13:47:30.340678] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:51.111 [2024-07-10 13:47:30.346880] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:51.111 [2024-07-10 13:47:30.346920] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:51.111 [2024-07-10 13:47:30.347167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.111 pt3 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.111 13:47:30 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.369 13:47:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.369 "name": "raid_bdev1", 00:23:51.369 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:51.369 "strip_size_kb": 64, 00:23:51.369 "state": "online", 00:23:51.369 "raid_level": "raid5f", 00:23:51.369 "superblock": true, 00:23:51.369 "num_base_bdevs": 3, 00:23:51.369 "num_base_bdevs_discovered": 3, 00:23:51.369 "num_base_bdevs_operational": 3, 00:23:51.369 "base_bdevs_list": [ 00:23:51.369 { 00:23:51.369 "name": "pt1", 00:23:51.369 "uuid": "1de0962e-451e-5ec8-a0b2-5c102df7a68c", 00:23:51.369 "is_configured": true, 00:23:51.369 "data_offset": 2048, 00:23:51.369 "data_size": 63488 00:23:51.369 }, 00:23:51.369 { 00:23:51.369 "name": "pt2", 00:23:51.370 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:51.370 "is_configured": true, 00:23:51.370 "data_offset": 2048, 00:23:51.370 "data_size": 63488 00:23:51.370 }, 00:23:51.370 { 00:23:51.370 "name": "pt3", 00:23:51.370 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:51.370 "is_configured": true, 00:23:51.370 "data_offset": 2048, 00:23:51.370 "data_size": 63488 00:23:51.370 } 00:23:51.370 ] 00:23:51.370 }' 00:23:51.370 13:47:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.370 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:52.304 [2024-07-10 13:47:31.488906] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@430 -- # '[' f9cb1daa-8a8f-4f9e-9811-8c0973d8841a '!=' f9cb1daa-8a8f-4f9e-9811-8c0973d8841a ']' 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:52.304 13:47:31 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:52.562 [2024-07-10 13:47:31.688400] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.562 "name": "raid_bdev1", 00:23:52.562 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:52.562 "strip_size_kb": 64, 
00:23:52.562 "state": "online", 00:23:52.562 "raid_level": "raid5f", 00:23:52.562 "superblock": true, 00:23:52.562 "num_base_bdevs": 3, 00:23:52.562 "num_base_bdevs_discovered": 2, 00:23:52.562 "num_base_bdevs_operational": 2, 00:23:52.562 "base_bdevs_list": [ 00:23:52.562 { 00:23:52.562 "name": null, 00:23:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.562 "is_configured": false, 00:23:52.562 "data_offset": 2048, 00:23:52.562 "data_size": 63488 00:23:52.562 }, 00:23:52.562 { 00:23:52.562 "name": "pt2", 00:23:52.562 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:52.562 "is_configured": true, 00:23:52.562 "data_offset": 2048, 00:23:52.562 "data_size": 63488 00:23:52.562 }, 00:23:52.562 { 00:23:52.562 "name": "pt3", 00:23:52.562 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:52.562 "is_configured": true, 00:23:52.562 "data_offset": 2048, 00:23:52.562 "data_size": 63488 00:23:52.562 } 00:23:52.562 ] 00:23:52.562 }' 00:23:52.562 13:47:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.562 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:23:53.499 13:47:32 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:53.499 [2024-07-10 13:47:32.758588] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.499 [2024-07-10 13:47:32.758659] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.499 [2024-07-10 13:47:32.758765] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.499 [2024-07-10 13:47:32.758845] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.499 [2024-07-10 13:47:32.758855] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:53.499 13:47:32 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:53.499 13:47:32 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.758 13:47:32 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:53.758 13:47:32 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:53.758 13:47:32 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:53.758 13:47:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:53.758 13:47:32 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:54.016 13:47:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:54.016 13:47:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:54.016 13:47:33 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:54.275 13:47:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:54.275 13:47:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:54.275 13:47:33 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:54.275 13:47:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:54.275 13:47:33 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:54.532 [2024-07-10 13:47:33.642961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:54.532 [2024-07-10 13:47:33.643059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:54.532 [2024-07-10 13:47:33.643097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:54.532 [2024-07-10 13:47:33.643119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.532 [2024-07-10 13:47:33.645397] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.532 [2024-07-10 13:47:33.645456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:54.532 [2024-07-10 13:47:33.645603] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:54.532 [2024-07-10 13:47:33.645676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:54.532 pt2 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.532 13:47:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.826 13:47:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.826 "name": "raid_bdev1", 00:23:54.826 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:54.826 "strip_size_kb": 64, 00:23:54.826 "state": "configuring", 00:23:54.826 "raid_level": "raid5f", 00:23:54.826 "superblock": true, 00:23:54.826 "num_base_bdevs": 3, 00:23:54.826 "num_base_bdevs_discovered": 1, 00:23:54.826 "num_base_bdevs_operational": 2, 00:23:54.826 "base_bdevs_list": [ 00:23:54.826 { 00:23:54.826 "name": null, 00:23:54.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.826 "is_configured": false, 00:23:54.826 "data_offset": 2048, 00:23:54.826 "data_size": 63488 00:23:54.826 }, 00:23:54.826 { 00:23:54.826 "name": "pt2", 00:23:54.826 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:54.826 "is_configured": true, 00:23:54.826 "data_offset": 2048, 00:23:54.826 "data_size": 63488 00:23:54.826 }, 00:23:54.826 { 00:23:54.826 "name": null, 00:23:54.826 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:54.826 "is_configured": false, 00:23:54.826 "data_offset": 2048, 00:23:54.826 "data_size": 63488 00:23:54.826 } 00:23:54.826 ] 00:23:54.826 }' 00:23:54.826 13:47:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.826 13:47:33 -- common/autotest_common.sh@10 -- # set +x 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:55.401 [2024-07-10 13:47:34.725105] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:55.401 [2024-07-10 13:47:34.725201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.401 [2024-07-10 13:47:34.725243] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:55.401 [2024-07-10 13:47:34.725267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.401 [2024-07-10 13:47:34.725742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.401 [2024-07-10 13:47:34.725773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:55.401 [2024-07-10 13:47:34.725903] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:55.401 [2024-07-10 13:47:34.725927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:55.401 [2024-07-10 13:47:34.726047] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:55.401 [2024-07-10 13:47:34.726064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:55.401 [2024-07-10 13:47:34.726152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:55.401 [2024-07-10 13:47:34.732112] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:55.401 [2024-07-10 13:47:34.732139] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:55.401 [2024-07-10 13:47:34.732468] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.401 pt3 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.401 13:47:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.660 13:47:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.660 "name": "raid_bdev1", 00:23:55.660 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:55.660 "strip_size_kb": 64, 00:23:55.660 "state": "online", 00:23:55.660 "raid_level": "raid5f", 00:23:55.660 "superblock": true, 00:23:55.660 "num_base_bdevs": 3, 00:23:55.660 "num_base_bdevs_discovered": 2, 00:23:55.660 "num_base_bdevs_operational": 2, 00:23:55.660 "base_bdevs_list": [ 00:23:55.660 { 00:23:55.660 "name": null, 00:23:55.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.660 "is_configured": false, 00:23:55.660 "data_offset": 2048, 00:23:55.660 "data_size": 63488 00:23:55.660 }, 00:23:55.660 { 00:23:55.660 "name": "pt2", 00:23:55.660 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 
00:23:55.660 "is_configured": true, 00:23:55.660 "data_offset": 2048, 00:23:55.660 "data_size": 63488 00:23:55.660 }, 00:23:55.660 { 00:23:55.660 "name": "pt3", 00:23:55.660 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:55.660 "is_configured": true, 00:23:55.660 "data_offset": 2048, 00:23:55.660 "data_size": 63488 00:23:55.660 } 00:23:55.660 ] 00:23:55.660 }' 00:23:55.660 13:47:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.660 13:47:34 -- common/autotest_common.sh@10 -- # set +x 00:23:56.597 13:47:35 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:56.597 13:47:35 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:56.597 [2024-07-10 13:47:35.810318] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:56.597 [2024-07-10 13:47:35.810369] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:56.597 [2024-07-10 13:47:35.810459] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.597 [2024-07-10 13:47:35.810526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.597 [2024-07-10 13:47:35.810536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:56.597 13:47:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:56.597 13:47:35 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.856 13:47:36 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:56.856 13:47:36 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:56.856 13:47:36 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:57.115 [2024-07-10 13:47:36.265559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:57.115 [2024-07-10 13:47:36.265667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.115 [2024-07-10 13:47:36.265708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:57.115 [2024-07-10 13:47:36.265726] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.115 [2024-07-10 13:47:36.268514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.115 [2024-07-10 13:47:36.268613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:57.115 [2024-07-10 13:47:36.268797] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:57.115 [2024-07-10 13:47:36.268888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:57.115 pt1 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.115 13:47:36 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.115 13:47:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.375 13:47:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.375 "name": "raid_bdev1", 00:23:57.375 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:57.375 "strip_size_kb": 64, 00:23:57.375 "state": "configuring", 00:23:57.375 "raid_level": "raid5f", 00:23:57.375 "superblock": true, 00:23:57.375 "num_base_bdevs": 3, 00:23:57.375 "num_base_bdevs_discovered": 1, 00:23:57.375 "num_base_bdevs_operational": 3, 00:23:57.375 "base_bdevs_list": [ 00:23:57.375 { 00:23:57.375 "name": "pt1", 00:23:57.375 "uuid": "1de0962e-451e-5ec8-a0b2-5c102df7a68c", 00:23:57.375 "is_configured": true, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 }, 00:23:57.375 { 00:23:57.375 "name": null, 00:23:57.375 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:57.375 "is_configured": false, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 }, 00:23:57.375 { 00:23:57.375 "name": null, 00:23:57.375 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:57.375 "is_configured": false, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 } 00:23:57.375 ] 00:23:57.375 }' 00:23:57.375 13:47:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.375 13:47:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.965 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:57.965 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:57.965 13:47:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:58.222 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:58.223 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:58.223 13:47:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:58.480 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:58.480 13:47:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:58.480 13:47:37 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:58.480 13:47:37 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:58.739 [2024-07-10 13:47:37.928171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:58.739 [2024-07-10 13:47:37.928271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.739 [2024-07-10 13:47:37.928305] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:58.739 [2024-07-10 13:47:37.928336] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.739 [2024-07-10 13:47:37.928820] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.739 [2024-07-10 13:47:37.928865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:58.739 [2024-07-10 13:47:37.929000] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:58.739 [2024-07-10 13:47:37.929019] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:58.739 [2024-07-10 13:47:37.929026] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:58.739 [2024-07-10 13:47:37.929052] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:58.739 [2024-07-10 13:47:37.929146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:58.739 pt3 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.739 13:47:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.997 13:47:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:58.997 "name": "raid_bdev1", 00:23:58.997 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:23:58.997 "strip_size_kb": 64, 00:23:58.997 "state": "configuring", 00:23:58.997 "raid_level": "raid5f", 00:23:58.997 "superblock": true, 00:23:58.997 "num_base_bdevs": 3, 00:23:58.997 "num_base_bdevs_discovered": 1, 00:23:58.997 "num_base_bdevs_operational": 2, 00:23:58.997 "base_bdevs_list": [ 00:23:58.997 { 00:23:58.997 "name": null, 00:23:58.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.997 "is_configured": false, 00:23:58.997 "data_offset": 2048, 00:23:58.997 "data_size": 63488 00:23:58.997 }, 00:23:58.997 { 00:23:58.997 "name": null, 00:23:58.997 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:23:58.997 "is_configured": false, 00:23:58.997 "data_offset": 2048, 00:23:58.997 "data_size": 63488 00:23:58.997 }, 00:23:58.997 { 00:23:58.997 "name": "pt3", 00:23:58.997 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:23:58.997 "is_configured": true, 00:23:58.997 "data_offset": 2048, 00:23:58.997 "data_size": 63488 00:23:58.997 } 00:23:58.997 ] 00:23:58.997 }' 00:23:58.997 13:47:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:58.997 13:47:38 -- common/autotest_common.sh@10 -- # set +x 00:23:59.564 13:47:38 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:59.564 13:47:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:59.564 13:47:38 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:59.822 [2024-07-10 13:47:39.042491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:59.822 [2024-07-10 13:47:39.043011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.822 [2024-07-10 
13:47:39.043184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:59.822 [2024-07-10 13:47:39.043291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.822 [2024-07-10 13:47:39.043885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.822 [2024-07-10 13:47:39.044036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:59.822 [2024-07-10 13:47:39.044275] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:59.822 [2024-07-10 13:47:39.044335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.822 [2024-07-10 13:47:39.044485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:59.822 [2024-07-10 13:47:39.044500] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:59.822 [2024-07-10 13:47:39.044624] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:59.822 [2024-07-10 13:47:39.051007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:59.822 [2024-07-10 13:47:39.051046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:59.822 [2024-07-10 13:47:39.051385] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.822 pt2 00:23:59.822 13:47:39 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:59.822 13:47:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:59.822 13:47:39 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:59.822 13:47:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:59.822 13:47:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.823 13:47:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.080 13:47:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.080 "name": "raid_bdev1", 00:24:00.080 "uuid": "f9cb1daa-8a8f-4f9e-9811-8c0973d8841a", 00:24:00.080 "strip_size_kb": 64, 00:24:00.080 "state": "online", 00:24:00.080 "raid_level": "raid5f", 00:24:00.080 "superblock": true, 00:24:00.080 "num_base_bdevs": 3, 00:24:00.080 "num_base_bdevs_discovered": 2, 00:24:00.080 "num_base_bdevs_operational": 2, 00:24:00.080 "base_bdevs_list": [ 00:24:00.080 { 00:24:00.080 "name": null, 00:24:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.080 "is_configured": false, 00:24:00.080 "data_offset": 2048, 00:24:00.080 "data_size": 63488 00:24:00.080 }, 00:24:00.080 { 00:24:00.080 "name": "pt2", 00:24:00.080 "uuid": "6fd58370-cf46-50c3-a5b9-78e144431004", 00:24:00.080 "is_configured": true, 00:24:00.080 "data_offset": 2048, 
00:24:00.080 "data_size": 63488 00:24:00.080 }, 00:24:00.080 { 00:24:00.080 "name": "pt3", 00:24:00.080 "uuid": "d1a0ec99-4239-5097-83de-ce956a791eba", 00:24:00.080 "is_configured": true, 00:24:00.080 "data_offset": 2048, 00:24:00.080 "data_size": 63488 00:24:00.080 } 00:24:00.080 ] 00:24:00.080 }' 00:24:00.080 13:47:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.080 13:47:39 -- common/autotest_common.sh@10 -- # set +x 00:24:01.013 13:47:40 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:01.013 13:47:40 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:01.013 [2024-07-10 13:47:40.249254] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.013 13:47:40 -- bdev/bdev_raid.sh@506 -- # '[' f9cb1daa-8a8f-4f9e-9811-8c0973d8841a '!=' f9cb1daa-8a8f-4f9e-9811-8c0973d8841a ']' 00:24:01.013 13:47:40 -- bdev/bdev_raid.sh@511 -- # killprocess 131330 00:24:01.013 13:47:40 -- common/autotest_common.sh@926 -- # '[' -z 131330 ']' 00:24:01.013 13:47:40 -- common/autotest_common.sh@930 -- # kill -0 131330 00:24:01.013 13:47:40 -- common/autotest_common.sh@931 -- # uname 00:24:01.013 13:47:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:01.013 13:47:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131330 00:24:01.013 killing process with pid 131330 00:24:01.013 13:47:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:01.013 13:47:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:01.013 13:47:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131330' 00:24:01.013 13:47:40 -- common/autotest_common.sh@945 -- # kill 131330 00:24:01.013 13:47:40 -- common/autotest_common.sh@950 -- # wait 131330 00:24:01.013 [2024-07-10 13:47:40.300657] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:01.014 [2024-07-10 13:47:40.300747] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.014 [2024-07-10 13:47:40.300812] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.014 [2024-07-10 13:47:40.300823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:01.580 [2024-07-10 13:47:40.634355] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:02.958 00:24:02.958 real 0m20.526s 00:24:02.958 user 0m37.369s 00:24:02.958 sys 0m2.316s 00:24:02.958 13:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.958 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.958 ************************************ 00:24:02.958 END TEST raid5f_superblock_test 00:24:02.958 ************************************ 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:24:02.958 13:47:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:02.958 13:47:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.958 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.958 ************************************ 00:24:02.958 START TEST raid5f_rebuild_test 00:24:02.958 ************************************ 00:24:02.958 13:47:42 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@544 -- # raid_pid=131978 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131978 /var/tmp/spdk-raid.sock 00:24:02.958 13:47:42 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:02.958 13:47:42 -- common/autotest_common.sh@819 -- # '[' -z 131978 ']' 00:24:02.958 13:47:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:02.958 13:47:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:02.958 13:47:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:02.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:02.958 13:47:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:02.958 13:47:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.958 [2024-07-10 13:47:42.195770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:02.958 [2024-07-10 13:47:42.196404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131978 ] 00:24:02.958 I/O size of 3145728 is greater than zero copy threshold (65536). 
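From here on, everything runs inside the single bdevperf process started above: it hosts the bdev layer, and the test mutates it live over the -r RPC socket while the workload (-t 60 seconds of -w randrw at -M 50 percent reads, -o 3M I/O size, -q 2 queue depth) runs against raid_bdev1. A condensed sketch of that wiring, where rpc is a local shorthand and the rpc_get_methods probe stands in for the test's more careful waitforlisten helper (both are assumptions for illustration, not verbatim script code):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # Start bdevperf idle (-z) so the bdevs can be created over RPC first.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

  # Block until the app answers on the socket (assumed readiness probe).
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  # Three 32 MiB malloc base bdevs (512 B blocks, 65536 blocks each), assembled
  # into a raid5f array with a 64 KiB strip -- matching the @553/@563 trace lines.
  for i in 1 2 3; do rpc bdev_malloc_create 32 512 -b "BaseBdev$i"; done
  rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1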
00:24:02.958 Zero copy mechanism will not be used. 00:24:03.217 [2024-07-10 13:47:42.359074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.476 [2024-07-10 13:47:42.576581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.476 [2024-07-10 13:47:42.800778] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.735 13:47:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:03.735 13:47:43 -- common/autotest_common.sh@852 -- # return 0 00:24:03.735 13:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:03.735 13:47:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:03.735 13:47:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:04.056 BaseBdev1 00:24:04.056 13:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:04.056 13:47:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:04.056 13:47:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:04.315 BaseBdev2 00:24:04.315 13:47:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:04.315 13:47:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:04.315 13:47:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:04.574 BaseBdev3 00:24:04.574 13:47:43 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:04.832 spare_malloc 00:24:04.832 13:47:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:05.091 spare_delay 00:24:05.091 13:47:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:05.349 [2024-07-10 13:47:44.586673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.349 [2024-07-10 13:47:44.586790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.349 [2024-07-10 13:47:44.586822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:05.349 [2024-07-10 13:47:44.586861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.349 spare 00:24:05.349 [2024-07-10 13:47:44.589177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.349 [2024-07-10 13:47:44.589239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.349 13:47:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:05.607 [2024-07-10 13:47:44.806314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.607 [2024-07-10 13:47:44.808106] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:05.607 [2024-07-10 13:47:44.808163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:05.607 [2024-07-10 13:47:44.808266] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x616000008a80 00:24:05.607 [2024-07-10 13:47:44.808282] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:05.607 [2024-07-10 13:47:44.808421] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:05.607 [2024-07-10 13:47:44.814251] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:24:05.607 [2024-07-10 13:47:44.814284] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:24:05.607 [2024-07-10 13:47:44.814543] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.607 13:47:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.865 13:47:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:05.865 "name": "raid_bdev1", 00:24:05.865 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:05.865 "strip_size_kb": 64, 00:24:05.865 "state": "online", 00:24:05.865 "raid_level": "raid5f", 00:24:05.865 "superblock": false, 00:24:05.865 "num_base_bdevs": 3, 00:24:05.865 "num_base_bdevs_discovered": 3, 00:24:05.865 "num_base_bdevs_operational": 3, 00:24:05.865 "base_bdevs_list": [ 00:24:05.865 { 00:24:05.865 "name": "BaseBdev1", 00:24:05.865 "uuid": "41f456ab-f63d-448f-9a67-3a8173300258", 00:24:05.865 "is_configured": true, 00:24:05.865 "data_offset": 0, 00:24:05.865 "data_size": 65536 00:24:05.865 }, 00:24:05.865 { 00:24:05.865 "name": "BaseBdev2", 00:24:05.865 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:05.865 "is_configured": true, 00:24:05.865 "data_offset": 0, 00:24:05.865 "data_size": 65536 00:24:05.865 }, 00:24:05.865 { 00:24:05.865 "name": "BaseBdev3", 00:24:05.865 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:05.865 "is_configured": true, 00:24:05.865 "data_offset": 0, 00:24:05.865 "data_size": 65536 00:24:05.865 } 00:24:05.865 ] 00:24:05.865 }' 00:24:05.865 13:47:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:05.865 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:24:06.429 13:47:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:06.429 13:47:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:06.686 [2024-07-10 13:47:45.944454] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.686 13:47:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:24:06.686 13:47:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.686 13:47:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:06.944 13:47:46 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:06.944 13:47:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:06.944 13:47:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:06.944 13:47:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:06.944 13:47:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:06.944 13:47:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@12 -- # local i 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:06.945 13:47:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:07.203 [2024-07-10 13:47:46.407722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:07.203 /dev/nbd0 00:24:07.203 13:47:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:07.203 13:47:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:07.203 13:47:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:07.203 13:47:46 -- common/autotest_common.sh@857 -- # local i 00:24:07.203 13:47:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:07.203 13:47:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:07.203 13:47:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:07.203 13:47:46 -- common/autotest_common.sh@861 -- # break 00:24:07.203 13:47:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:07.203 13:47:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:07.203 13:47:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:07.203 1+0 records in 00:24:07.203 1+0 records out 00:24:07.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323444 s, 12.7 MB/s 00:24:07.203 13:47:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.203 13:47:46 -- common/autotest_common.sh@874 -- # size=4096 00:24:07.203 13:47:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:07.203 13:47:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:07.203 13:47:46 -- common/autotest_common.sh@877 -- # return 0 00:24:07.203 13:47:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:07.203 13:47:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:07.203 13:47:46 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:07.203 13:47:46 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:07.203 13:47:46 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:07.203 13:47:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:24:07.771 512+0 records in 00:24:07.771 512+0 records out 00:24:07.771 67108864 bytes (67 MB, 64 MiB) copied, 0.416089 s, 161 MB/s 00:24:07.771 13:47:46 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@51 -- # local i 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:07.771 13:47:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:07.771 [2024-07-10 13:47:47.117712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@41 -- # break 00:24:07.771 13:47:47 -- bdev/nbd_common.sh@45 -- # return 0 00:24:07.771 13:47:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:08.028 [2024-07-10 13:47:47.316705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.028 13:47:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.287 13:47:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:08.287 "name": "raid_bdev1", 00:24:08.287 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:08.287 "strip_size_kb": 64, 00:24:08.287 "state": "online", 00:24:08.287 "raid_level": "raid5f", 00:24:08.287 "superblock": false, 00:24:08.287 "num_base_bdevs": 3, 00:24:08.287 "num_base_bdevs_discovered": 2, 00:24:08.287 "num_base_bdevs_operational": 2, 00:24:08.287 "base_bdevs_list": [ 00:24:08.287 { 00:24:08.287 "name": null, 00:24:08.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.287 "is_configured": false, 00:24:08.287 "data_offset": 0, 00:24:08.287 "data_size": 65536 00:24:08.287 }, 00:24:08.287 { 00:24:08.287 "name": "BaseBdev2", 00:24:08.287 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:08.287 "is_configured": true, 00:24:08.287 "data_offset": 0, 00:24:08.287 "data_size": 65536 00:24:08.287 }, 00:24:08.287 { 00:24:08.287 "name": "BaseBdev3", 00:24:08.287 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:08.287 "is_configured": true, 00:24:08.287 "data_offset": 0, 00:24:08.287 "data_size": 65536 00:24:08.287 } 
00:24:08.287 ] 00:24:08.287 }' 00:24:08.287 13:47:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:08.287 13:47:47 -- common/autotest_common.sh@10 -- # set +x 00:24:08.853 13:47:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:09.118 [2024-07-10 13:47:48.330961] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:09.118 [2024-07-10 13:47:48.331024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:09.118 [2024-07-10 13:47:48.348254] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:24:09.118 [2024-07-10 13:47:48.355592] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:09.118 13:47:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:10.064 13:47:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.065 13:47:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.324 13:47:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.324 "name": "raid_bdev1", 00:24:10.324 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:10.324 "strip_size_kb": 64, 00:24:10.324 "state": "online", 00:24:10.324 "raid_level": "raid5f", 00:24:10.324 "superblock": false, 00:24:10.324 "num_base_bdevs": 3, 00:24:10.324 "num_base_bdevs_discovered": 3, 00:24:10.324 "num_base_bdevs_operational": 3, 00:24:10.324 "process": { 00:24:10.324 "type": "rebuild", 00:24:10.324 "target": "spare", 00:24:10.324 "progress": { 00:24:10.324 "blocks": 24576, 00:24:10.324 "percent": 18 00:24:10.324 } 00:24:10.324 }, 00:24:10.324 "base_bdevs_list": [ 00:24:10.324 { 00:24:10.324 "name": "spare", 00:24:10.324 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:10.324 "is_configured": true, 00:24:10.324 "data_offset": 0, 00:24:10.324 "data_size": 65536 00:24:10.324 }, 00:24:10.324 { 00:24:10.324 "name": "BaseBdev2", 00:24:10.324 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:10.324 "is_configured": true, 00:24:10.324 "data_offset": 0, 00:24:10.324 "data_size": 65536 00:24:10.324 }, 00:24:10.324 { 00:24:10.324 "name": "BaseBdev3", 00:24:10.324 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:10.324 "is_configured": true, 00:24:10.324 "data_offset": 0, 00:24:10.324 "data_size": 65536 00:24:10.324 } 00:24:10.324 ] 00:24:10.324 }' 00:24:10.324 13:47:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.324 13:47:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.324 13:47:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.583 13:47:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.583 13:47:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:10.583 [2024-07-10 13:47:49.914774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.843 [2024-07-10 13:47:49.968869] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:10.843 [2024-07-10 13:47:49.968988] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.843 13:47:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.101 13:47:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:11.101 "name": "raid_bdev1", 00:24:11.101 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:11.101 "strip_size_kb": 64, 00:24:11.101 "state": "online", 00:24:11.101 "raid_level": "raid5f", 00:24:11.101 "superblock": false, 00:24:11.101 "num_base_bdevs": 3, 00:24:11.101 "num_base_bdevs_discovered": 2, 00:24:11.101 "num_base_bdevs_operational": 2, 00:24:11.101 "base_bdevs_list": [ 00:24:11.101 { 00:24:11.101 "name": null, 00:24:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.101 "is_configured": false, 00:24:11.101 "data_offset": 0, 00:24:11.101 "data_size": 65536 00:24:11.101 }, 00:24:11.101 { 00:24:11.101 "name": "BaseBdev2", 00:24:11.101 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:11.101 "is_configured": true, 00:24:11.101 "data_offset": 0, 00:24:11.101 "data_size": 65536 00:24:11.101 }, 00:24:11.101 { 00:24:11.101 "name": "BaseBdev3", 00:24:11.101 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:11.101 "is_configured": true, 00:24:11.101 "data_offset": 0, 00:24:11.101 "data_size": 65536 00:24:11.101 } 00:24:11.101 ] 00:24:11.101 }' 00:24:11.101 13:47:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:11.101 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.668 13:47:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.925 13:47:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.925 "name": "raid_bdev1", 00:24:11.925 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:11.925 "strip_size_kb": 64, 00:24:11.925 "state": "online", 00:24:11.926 "raid_level": "raid5f", 00:24:11.926 "superblock": false, 00:24:11.926 "num_base_bdevs": 3, 00:24:11.926 
"num_base_bdevs_discovered": 2, 00:24:11.926 "num_base_bdevs_operational": 2, 00:24:11.926 "base_bdevs_list": [ 00:24:11.926 { 00:24:11.926 "name": null, 00:24:11.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.926 "is_configured": false, 00:24:11.926 "data_offset": 0, 00:24:11.926 "data_size": 65536 00:24:11.926 }, 00:24:11.926 { 00:24:11.926 "name": "BaseBdev2", 00:24:11.926 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:11.926 "is_configured": true, 00:24:11.926 "data_offset": 0, 00:24:11.926 "data_size": 65536 00:24:11.926 }, 00:24:11.926 { 00:24:11.926 "name": "BaseBdev3", 00:24:11.926 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:11.926 "is_configured": true, 00:24:11.926 "data_offset": 0, 00:24:11.926 "data_size": 65536 00:24:11.926 } 00:24:11.926 ] 00:24:11.926 }' 00:24:11.926 13:47:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.926 13:47:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.926 13:47:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.926 13:47:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:11.926 13:47:51 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:12.184 [2024-07-10 13:47:51.437149] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:12.184 [2024-07-10 13:47:51.437210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.184 [2024-07-10 13:47:51.455051] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:24:12.184 [2024-07-10 13:47:51.463662] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:12.184 13:47:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.559 "name": "raid_bdev1", 00:24:13.559 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:13.559 "strip_size_kb": 64, 00:24:13.559 "state": "online", 00:24:13.559 "raid_level": "raid5f", 00:24:13.559 "superblock": false, 00:24:13.559 "num_base_bdevs": 3, 00:24:13.559 "num_base_bdevs_discovered": 3, 00:24:13.559 "num_base_bdevs_operational": 3, 00:24:13.559 "process": { 00:24:13.559 "type": "rebuild", 00:24:13.559 "target": "spare", 00:24:13.559 "progress": { 00:24:13.559 "blocks": 22528, 00:24:13.559 "percent": 17 00:24:13.559 } 00:24:13.559 }, 00:24:13.559 "base_bdevs_list": [ 00:24:13.559 { 00:24:13.559 "name": "spare", 00:24:13.559 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:13.559 "is_configured": true, 00:24:13.559 "data_offset": 0, 00:24:13.559 "data_size": 65536 00:24:13.559 }, 00:24:13.559 { 00:24:13.559 "name": "BaseBdev2", 00:24:13.559 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:13.559 "is_configured": true, 
00:24:13.559 "data_offset": 0, 00:24:13.559 "data_size": 65536 00:24:13.559 }, 00:24:13.559 { 00:24:13.559 "name": "BaseBdev3", 00:24:13.559 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:13.559 "is_configured": true, 00:24:13.559 "data_offset": 0, 00:24:13.559 "data_size": 65536 00:24:13.559 } 00:24:13.559 ] 00:24:13.559 }' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@657 -- # local timeout=587 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.559 13:47:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.827 13:47:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.827 "name": "raid_bdev1", 00:24:13.827 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:13.827 "strip_size_kb": 64, 00:24:13.827 "state": "online", 00:24:13.827 "raid_level": "raid5f", 00:24:13.827 "superblock": false, 00:24:13.827 "num_base_bdevs": 3, 00:24:13.827 "num_base_bdevs_discovered": 3, 00:24:13.827 "num_base_bdevs_operational": 3, 00:24:13.827 "process": { 00:24:13.827 "type": "rebuild", 00:24:13.827 "target": "spare", 00:24:13.827 "progress": { 00:24:13.827 "blocks": 30720, 00:24:13.827 "percent": 23 00:24:13.827 } 00:24:13.827 }, 00:24:13.827 "base_bdevs_list": [ 00:24:13.827 { 00:24:13.827 "name": "spare", 00:24:13.827 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:13.827 "is_configured": true, 00:24:13.827 "data_offset": 0, 00:24:13.827 "data_size": 65536 00:24:13.827 }, 00:24:13.827 { 00:24:13.827 "name": "BaseBdev2", 00:24:13.827 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:13.827 "is_configured": true, 00:24:13.827 "data_offset": 0, 00:24:13.827 "data_size": 65536 00:24:13.827 }, 00:24:13.827 { 00:24:13.827 "name": "BaseBdev3", 00:24:13.827 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:13.827 "is_configured": true, 00:24:13.827 "data_offset": 0, 00:24:13.827 "data_size": 65536 00:24:13.827 } 00:24:13.827 ] 00:24:13.827 }' 00:24:13.827 13:47:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.827 13:47:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.827 13:47:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.120 13:47:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.120 13:47:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:15.055 13:47:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:15.055 
13:47:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.055 13:47:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:15.055 13:47:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:15.055 13:47:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:15.056 13:47:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:15.056 13:47:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.056 13:47:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.314 "name": "raid_bdev1", 00:24:15.314 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:15.314 "strip_size_kb": 64, 00:24:15.314 "state": "online", 00:24:15.314 "raid_level": "raid5f", 00:24:15.314 "superblock": false, 00:24:15.314 "num_base_bdevs": 3, 00:24:15.314 "num_base_bdevs_discovered": 3, 00:24:15.314 "num_base_bdevs_operational": 3, 00:24:15.314 "process": { 00:24:15.314 "type": "rebuild", 00:24:15.314 "target": "spare", 00:24:15.314 "progress": { 00:24:15.314 "blocks": 59392, 00:24:15.314 "percent": 45 00:24:15.314 } 00:24:15.314 }, 00:24:15.314 "base_bdevs_list": [ 00:24:15.314 { 00:24:15.314 "name": "spare", 00:24:15.314 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:15.314 "is_configured": true, 00:24:15.314 "data_offset": 0, 00:24:15.314 "data_size": 65536 00:24:15.314 }, 00:24:15.314 { 00:24:15.314 "name": "BaseBdev2", 00:24:15.314 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:15.314 "is_configured": true, 00:24:15.314 "data_offset": 0, 00:24:15.314 "data_size": 65536 00:24:15.314 }, 00:24:15.314 { 00:24:15.314 "name": "BaseBdev3", 00:24:15.314 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:15.314 "is_configured": true, 00:24:15.314 "data_offset": 0, 00:24:15.314 "data_size": 65536 00:24:15.314 } 00:24:15.314 ] 00:24:15.314 }' 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.314 13:47:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.252 13:47:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.513 13:47:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.513 "name": "raid_bdev1", 00:24:16.513 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:16.513 "strip_size_kb": 64, 00:24:16.513 "state": "online", 00:24:16.513 "raid_level": "raid5f", 00:24:16.513 "superblock": false, 00:24:16.513 "num_base_bdevs": 3, 00:24:16.513 "num_base_bdevs_discovered": 3, 00:24:16.513 "num_base_bdevs_operational": 3, 
00:24:16.513 "process": { 00:24:16.514 "type": "rebuild", 00:24:16.514 "target": "spare", 00:24:16.514 "progress": { 00:24:16.514 "blocks": 86016, 00:24:16.514 "percent": 65 00:24:16.514 } 00:24:16.514 }, 00:24:16.514 "base_bdevs_list": [ 00:24:16.514 { 00:24:16.514 "name": "spare", 00:24:16.514 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:16.514 "is_configured": true, 00:24:16.514 "data_offset": 0, 00:24:16.514 "data_size": 65536 00:24:16.514 }, 00:24:16.514 { 00:24:16.514 "name": "BaseBdev2", 00:24:16.514 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:16.514 "is_configured": true, 00:24:16.514 "data_offset": 0, 00:24:16.514 "data_size": 65536 00:24:16.514 }, 00:24:16.514 { 00:24:16.514 "name": "BaseBdev3", 00:24:16.514 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:16.514 "is_configured": true, 00:24:16.514 "data_offset": 0, 00:24:16.514 "data_size": 65536 00:24:16.514 } 00:24:16.514 ] 00:24:16.514 }' 00:24:16.514 13:47:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.514 13:47:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.514 13:47:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.774 13:47:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.774 13:47:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.712 13:47:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.969 13:47:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.969 "name": "raid_bdev1", 00:24:17.969 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:17.969 "strip_size_kb": 64, 00:24:17.970 "state": "online", 00:24:17.970 "raid_level": "raid5f", 00:24:17.970 "superblock": false, 00:24:17.970 "num_base_bdevs": 3, 00:24:17.970 "num_base_bdevs_discovered": 3, 00:24:17.970 "num_base_bdevs_operational": 3, 00:24:17.970 "process": { 00:24:17.970 "type": "rebuild", 00:24:17.970 "target": "spare", 00:24:17.970 "progress": { 00:24:17.970 "blocks": 114688, 00:24:17.970 "percent": 87 00:24:17.970 } 00:24:17.970 }, 00:24:17.970 "base_bdevs_list": [ 00:24:17.970 { 00:24:17.970 "name": "spare", 00:24:17.970 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:17.970 "is_configured": true, 00:24:17.970 "data_offset": 0, 00:24:17.970 "data_size": 65536 00:24:17.970 }, 00:24:17.970 { 00:24:17.970 "name": "BaseBdev2", 00:24:17.970 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:17.970 "is_configured": true, 00:24:17.970 "data_offset": 0, 00:24:17.970 "data_size": 65536 00:24:17.970 }, 00:24:17.970 { 00:24:17.970 "name": "BaseBdev3", 00:24:17.970 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:17.970 "is_configured": true, 00:24:17.970 "data_offset": 0, 00:24:17.970 "data_size": 65536 00:24:17.970 } 00:24:17.970 ] 00:24:17.970 }' 00:24:17.970 13:47:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.970 13:47:57 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.970 13:47:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.970 13:47:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.970 13:47:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:18.576 [2024-07-10 13:47:57.919589] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:18.576 [2024-07-10 13:47:57.919687] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:18.576 [2024-07-10 13:47:57.919810] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:19.141 "name": "raid_bdev1", 00:24:19.141 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:19.141 "strip_size_kb": 64, 00:24:19.141 "state": "online", 00:24:19.141 "raid_level": "raid5f", 00:24:19.141 "superblock": false, 00:24:19.141 "num_base_bdevs": 3, 00:24:19.141 "num_base_bdevs_discovered": 3, 00:24:19.141 "num_base_bdevs_operational": 3, 00:24:19.141 "base_bdevs_list": [ 00:24:19.141 { 00:24:19.141 "name": "spare", 00:24:19.141 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 }, 00:24:19.141 { 00:24:19.141 "name": "BaseBdev2", 00:24:19.141 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 }, 00:24:19.141 { 00:24:19.141 "name": "BaseBdev3", 00:24:19.141 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:19.141 "is_configured": true, 00:24:19.141 "data_offset": 0, 00:24:19.141 "data_size": 65536 00:24:19.141 } 00:24:19.141 ] 00:24:19.141 }' 00:24:19.141 13:47:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@660 -- # break 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.400 13:47:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:19.659 13:47:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:19.659 "name": "raid_bdev1", 00:24:19.659 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:19.659 "strip_size_kb": 64, 00:24:19.659 "state": "online", 00:24:19.659 "raid_level": "raid5f", 00:24:19.659 "superblock": false, 00:24:19.659 "num_base_bdevs": 3, 00:24:19.659 "num_base_bdevs_discovered": 3, 00:24:19.659 "num_base_bdevs_operational": 3, 00:24:19.659 "base_bdevs_list": [ 00:24:19.659 { 00:24:19.659 "name": "spare", 00:24:19.659 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:19.659 "is_configured": true, 00:24:19.659 "data_offset": 0, 00:24:19.659 "data_size": 65536 00:24:19.659 }, 00:24:19.659 { 00:24:19.660 "name": "BaseBdev2", 00:24:19.660 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:19.660 "is_configured": true, 00:24:19.660 "data_offset": 0, 00:24:19.660 "data_size": 65536 00:24:19.660 }, 00:24:19.660 { 00:24:19.660 "name": "BaseBdev3", 00:24:19.660 "uuid": "13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:19.660 "is_configured": true, 00:24:19.660 "data_offset": 0, 00:24:19.660 "data_size": 65536 00:24:19.660 } 00:24:19.660 ] 00:24:19.660 }' 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.660 13:47:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.918 13:47:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.918 "name": "raid_bdev1", 00:24:19.918 "uuid": "f93ff3e8-5e1c-43f8-b837-b18b54aaffca", 00:24:19.918 "strip_size_kb": 64, 00:24:19.918 "state": "online", 00:24:19.918 "raid_level": "raid5f", 00:24:19.918 "superblock": false, 00:24:19.918 "num_base_bdevs": 3, 00:24:19.918 "num_base_bdevs_discovered": 3, 00:24:19.918 "num_base_bdevs_operational": 3, 00:24:19.918 "base_bdevs_list": [ 00:24:19.918 { 00:24:19.918 "name": "spare", 00:24:19.918 "uuid": "20cb9875-4d5a-59fc-9fa7-9adedbe05380", 00:24:19.918 "is_configured": true, 00:24:19.918 "data_offset": 0, 00:24:19.918 "data_size": 65536 00:24:19.918 }, 00:24:19.918 { 00:24:19.918 "name": "BaseBdev2", 00:24:19.918 "uuid": "4b5efa19-cf37-40fe-bc6d-197fb68e6f3e", 00:24:19.918 "is_configured": true, 00:24:19.918 "data_offset": 0, 00:24:19.918 "data_size": 65536 00:24:19.918 }, 00:24:19.918 { 00:24:19.918 "name": "BaseBdev3", 00:24:19.918 "uuid": 
"13aa5608-4df5-41d6-bd4e-fa4cdda4552a", 00:24:19.918 "is_configured": true, 00:24:19.918 "data_offset": 0, 00:24:19.918 "data_size": 65536 00:24:19.918 } 00:24:19.918 ] 00:24:19.918 }' 00:24:19.918 13:47:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.918 13:47:59 -- common/autotest_common.sh@10 -- # set +x 00:24:20.484 13:47:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:20.743 [2024-07-10 13:48:00.012370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:20.743 [2024-07-10 13:48:00.012416] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.743 [2024-07-10 13:48:00.012512] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.743 [2024-07-10 13:48:00.012584] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:20.743 [2024-07-10 13:48:00.012593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:20.743 13:48:00 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.743 13:48:00 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:21.001 13:48:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:21.001 13:48:00 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:21.001 13:48:00 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@12 -- # local i 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.001 13:48:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:21.260 /dev/nbd0 00:24:21.260 13:48:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:21.260 13:48:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:21.260 13:48:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:21.260 13:48:00 -- common/autotest_common.sh@857 -- # local i 00:24:21.260 13:48:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:21.260 13:48:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:21.260 13:48:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:21.260 13:48:00 -- common/autotest_common.sh@861 -- # break 00:24:21.260 13:48:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:21.260 13:48:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:21.260 13:48:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.260 1+0 records in 00:24:21.260 1+0 records out 00:24:21.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021002 s, 19.5 MB/s 00:24:21.261 13:48:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.261 13:48:00 -- common/autotest_common.sh@874 -- # 
size=4096 00:24:21.261 13:48:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.261 13:48:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:21.261 13:48:00 -- common/autotest_common.sh@877 -- # return 0 00:24:21.261 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.261 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.261 13:48:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:21.518 /dev/nbd1 00:24:21.518 13:48:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:21.518 13:48:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:21.518 13:48:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:21.518 13:48:00 -- common/autotest_common.sh@857 -- # local i 00:24:21.518 13:48:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:21.518 13:48:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:21.518 13:48:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:21.518 13:48:00 -- common/autotest_common.sh@861 -- # break 00:24:21.518 13:48:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:21.518 13:48:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:21.518 13:48:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:21.518 1+0 records in 00:24:21.518 1+0 records out 00:24:21.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436359 s, 9.4 MB/s 00:24:21.518 13:48:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.518 13:48:00 -- common/autotest_common.sh@874 -- # size=4096 00:24:21.518 13:48:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:21.518 13:48:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:21.518 13:48:00 -- common/autotest_common.sh@877 -- # return 0 00:24:21.518 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:21.518 13:48:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:21.518 13:48:00 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:21.777 13:48:00 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@51 -- # local i 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:21.777 13:48:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@41 -- # break 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@45 -- # return 0 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:22.035 13:48:01 -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@41 -- # break
00:24:22.292 13:48:01 -- bdev/nbd_common.sh@45 -- # return 0
00:24:22.292 13:48:01 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:24:22.292 13:48:01 -- bdev/bdev_raid.sh@709 -- # killprocess 131978
00:24:22.292 13:48:01 -- common/autotest_common.sh@926 -- # '[' -z 131978 ']'
00:24:22.292 13:48:01 -- common/autotest_common.sh@930 -- # kill -0 131978
00:24:22.292 13:48:01 -- common/autotest_common.sh@931 -- # uname
00:24:22.292 13:48:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:22.292 13:48:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131978
00:24:22.292 13:48:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:22.292 13:48:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:22.292 13:48:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131978'
killing process with pid 131978
13:48:01 -- common/autotest_common.sh@945 -- # kill 131978
Received shutdown signal, test time was about 60.000000 seconds
00:24:22.292
00:24:22.292 Latency(us)
00:24:22.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.292 ===================================================================================================================
00:24:22.292 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:22.292 [2024-07-10 13:48:01.586487] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:22.292 13:48:01 -- common/autotest_common.sh@950 -- # wait 131978
00:24:22.863 [2024-07-10 13:48:01.990993] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:24.234 ************************************
00:24:24.234 END TEST raid5f_rebuild_test
00:24:24.234 ************************************
00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@711 -- # return 0
00:24:24.234
00:24:24.234 real 0m21.174s
00:24:24.234 user 0m31.834s
00:24:24.234 sys 0m2.280s
00:24:24.234 13:48:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:24.234 13:48:03 -- common/autotest_common.sh@10 -- # set +x
00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false
00:24:24.234 13:48:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:24:24.234 13:48:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:24.234 13:48:03 -- common/autotest_common.sh@10 -- # set +x
00:24:24.234 ************************************
00:24:24.234 START TEST raid5f_rebuild_test_sb
00:24:24.234 ************************************
00:24:24.234 13:48:03 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true
false 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@544 -- # raid_pid=132570 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132570 /var/tmp/spdk-raid.sock 00:24:24.234 13:48:03 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:24.234 13:48:03 -- common/autotest_common.sh@819 -- # '[' -z 132570 ']' 00:24:24.234 13:48:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:24.234 13:48:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:24.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:24.234 13:48:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:24.234 13:48:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:24.234 13:48:03 -- common/autotest_common.sh@10 -- # set +x 00:24:24.234 [2024-07-10 13:48:03.446825] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:24.234 [2024-07-10 13:48:03.447062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132570 ] 00:24:24.234 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:24.234 Zero copy mechanism will not be used. 00:24:24.492 [2024-07-10 13:48:03.612397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.492 [2024-07-10 13:48:03.809112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.749 [2024-07-10 13:48:04.009812] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.008 13:48:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:25.008 13:48:04 -- common/autotest_common.sh@852 -- # return 0 00:24:25.008 13:48:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:25.008 13:48:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:25.008 13:48:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:25.285 BaseBdev1_malloc 00:24:25.285 13:48:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:25.542 [2024-07-10 13:48:04.724168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:25.542 [2024-07-10 13:48:04.724268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.542 [2024-07-10 13:48:04.724295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:25.542 [2024-07-10 13:48:04.724331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.542 [2024-07-10 13:48:04.726426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.542 [2024-07-10 13:48:04.726473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:25.542 BaseBdev1 00:24:25.542 13:48:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:25.542 13:48:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:25.542 13:48:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:25.800 BaseBdev2_malloc 00:24:25.800 13:48:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:26.059 [2024-07-10 13:48:05.188320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:26.059 [2024-07-10 13:48:05.188415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.059 [2024-07-10 13:48:05.188449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:26.059 [2024-07-10 13:48:05.188488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.059 [2024-07-10 13:48:05.190552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.059 [2024-07-10 13:48:05.190594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:26.059 BaseBdev2 00:24:26.059 13:48:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:24:26.059 13:48:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:26.059 13:48:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:26.316 BaseBdev3_malloc 00:24:26.316 13:48:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:26.316 [2024-07-10 13:48:05.627364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:26.316 [2024-07-10 13:48:05.627451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.316 [2024-07-10 13:48:05.627486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:26.316 [2024-07-10 13:48:05.627518] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.316 [2024-07-10 13:48:05.629460] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.316 [2024-07-10 13:48:05.629514] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:26.316 BaseBdev3 00:24:26.316 13:48:05 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:26.574 spare_malloc 00:24:26.574 13:48:05 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:26.831 spare_delay 00:24:26.832 13:48:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:27.098 [2024-07-10 13:48:06.266112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:27.098 [2024-07-10 13:48:06.266213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.098 [2024-07-10 13:48:06.266244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:27.098 [2024-07-10 13:48:06.266275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.098 [2024-07-10 13:48:06.268402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.098 [2024-07-10 13:48:06.268457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:27.098 spare 00:24:27.098 13:48:06 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:27.357 [2024-07-10 13:48:06.461861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.357 [2024-07-10 13:48:06.463651] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:27.357 [2024-07-10 13:48:06.463721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:27.357 [2024-07-10 13:48:06.463924] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:24:27.357 [2024-07-10 13:48:06.463942] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:27.357 [2024-07-10 13:48:06.464078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:27.357 [2024-07-10 13:48:06.469885] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:27.357 [2024-07-10 13:48:06.469917] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:27.357 [2024-07-10 13:48:06.470132] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:27.357 13:48:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:27.358 13:48:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.358 13:48:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.358 13:48:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:27.358 "name": "raid_bdev1", 00:24:27.358 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:27.358 "strip_size_kb": 64, 00:24:27.358 "state": "online", 00:24:27.358 "raid_level": "raid5f", 00:24:27.358 "superblock": true, 00:24:27.358 "num_base_bdevs": 3, 00:24:27.358 "num_base_bdevs_discovered": 3, 00:24:27.358 "num_base_bdevs_operational": 3, 00:24:27.358 "base_bdevs_list": [ 00:24:27.358 { 00:24:27.358 "name": "BaseBdev1", 00:24:27.358 "uuid": "376a8004-9f40-586d-8954-e73dd26cd583", 00:24:27.358 "is_configured": true, 00:24:27.358 "data_offset": 2048, 00:24:27.358 "data_size": 63488 00:24:27.358 }, 00:24:27.358 { 00:24:27.358 "name": "BaseBdev2", 00:24:27.358 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:27.358 "is_configured": true, 00:24:27.358 "data_offset": 2048, 00:24:27.358 "data_size": 63488 00:24:27.358 }, 00:24:27.358 { 00:24:27.358 "name": "BaseBdev3", 00:24:27.358 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:27.358 "is_configured": true, 00:24:27.358 "data_offset": 2048, 00:24:27.358 "data_size": 63488 00:24:27.358 } 00:24:27.358 ] 00:24:27.358 }' 00:24:27.358 13:48:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.358 13:48:06 -- common/autotest_common.sh@10 -- # set +x 00:24:28.294 13:48:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:28.294 13:48:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:28.295 [2024-07-10 13:48:07.523487] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.295 13:48:07 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:28.295 13:48:07 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.295 13:48:07 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:28.553 13:48:07 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:28.553 13:48:07 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:28.553 13:48:07 -- 
bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:28.553 13:48:07 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@12 -- # local i 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:28.553 13:48:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:28.553 [2024-07-10 13:48:07.902704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:28.812 /dev/nbd0 00:24:28.812 13:48:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:28.812 13:48:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:28.812 13:48:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:28.812 13:48:07 -- common/autotest_common.sh@857 -- # local i 00:24:28.812 13:48:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:28.812 13:48:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:28.812 13:48:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:28.812 13:48:07 -- common/autotest_common.sh@861 -- # break 00:24:28.812 13:48:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:28.812 13:48:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:28.812 13:48:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:28.812 1+0 records in 00:24:28.812 1+0 records out 00:24:28.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379668 s, 10.8 MB/s 00:24:28.812 13:48:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.812 13:48:07 -- common/autotest_common.sh@874 -- # size=4096 00:24:28.812 13:48:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.812 13:48:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:28.812 13:48:07 -- common/autotest_common.sh@877 -- # return 0 00:24:28.812 13:48:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.812 13:48:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:28.812 13:48:07 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:28.812 13:48:07 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:28.812 13:48:07 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:28.812 13:48:07 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:29.072 496+0 records in 00:24:29.072 496+0 records out 00:24:29.072 65011712 bytes (65 MB, 62 MiB) copied, 0.421069 s, 154 MB/s 00:24:29.072 13:48:08 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:29.072 13:48:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:29.072 13:48:08 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:29.072 13:48:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.072 13:48:08 -- bdev/nbd_common.sh@51 -- # local i 00:24:29.072 13:48:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:24:29.072 13:48:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.330 [2024-07-10 13:48:08.622630] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@41 -- # break 00:24:29.330 13:48:08 -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.330 13:48:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:29.590 [2024-07-10 13:48:08.813456] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.590 13:48:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.850 13:48:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:29.850 "name": "raid_bdev1", 00:24:29.850 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:29.850 "strip_size_kb": 64, 00:24:29.850 "state": "online", 00:24:29.850 "raid_level": "raid5f", 00:24:29.850 "superblock": true, 00:24:29.850 "num_base_bdevs": 3, 00:24:29.850 "num_base_bdevs_discovered": 2, 00:24:29.850 "num_base_bdevs_operational": 2, 00:24:29.850 "base_bdevs_list": [ 00:24:29.850 { 00:24:29.850 "name": null, 00:24:29.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.850 "is_configured": false, 00:24:29.850 "data_offset": 2048, 00:24:29.850 "data_size": 63488 00:24:29.850 }, 00:24:29.850 { 00:24:29.850 "name": "BaseBdev2", 00:24:29.850 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:29.850 "is_configured": true, 00:24:29.850 "data_offset": 2048, 00:24:29.850 "data_size": 63488 00:24:29.850 }, 00:24:29.850 { 00:24:29.850 "name": "BaseBdev3", 00:24:29.850 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:29.850 "is_configured": true, 00:24:29.850 "data_offset": 2048, 00:24:29.850 "data_size": 63488 00:24:29.850 } 00:24:29.850 ] 00:24:29.850 }' 00:24:29.850 13:48:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:29.850 13:48:09 -- common/autotest_common.sh@10 -- # set +x 00:24:30.417 13:48:09 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 spare 00:24:30.676 [2024-07-10 13:48:09.899562] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:30.676 [2024-07-10 13:48:09.899642] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:30.676 [2024-07-10 13:48:09.915639] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:24:30.676 [2024-07-10 13:48:09.923120] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:30.676 13:48:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.672 13:48:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.929 13:48:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:31.929 "name": "raid_bdev1", 00:24:31.929 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:31.929 "strip_size_kb": 64, 00:24:31.929 "state": "online", 00:24:31.929 "raid_level": "raid5f", 00:24:31.929 "superblock": true, 00:24:31.929 "num_base_bdevs": 3, 00:24:31.929 "num_base_bdevs_discovered": 3, 00:24:31.929 "num_base_bdevs_operational": 3, 00:24:31.929 "process": { 00:24:31.929 "type": "rebuild", 00:24:31.929 "target": "spare", 00:24:31.929 "progress": { 00:24:31.929 "blocks": 22528, 00:24:31.929 "percent": 17 00:24:31.929 } 00:24:31.929 }, 00:24:31.929 "base_bdevs_list": [ 00:24:31.929 { 00:24:31.929 "name": "spare", 00:24:31.929 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:31.929 "is_configured": true, 00:24:31.929 "data_offset": 2048, 00:24:31.929 "data_size": 63488 00:24:31.929 }, 00:24:31.929 { 00:24:31.929 "name": "BaseBdev2", 00:24:31.929 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:31.929 "is_configured": true, 00:24:31.929 "data_offset": 2048, 00:24:31.929 "data_size": 63488 00:24:31.929 }, 00:24:31.929 { 00:24:31.929 "name": "BaseBdev3", 00:24:31.929 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:31.929 "is_configured": true, 00:24:31.929 "data_offset": 2048, 00:24:31.929 "data_size": 63488 00:24:31.929 } 00:24:31.929 ] 00:24:31.929 }' 00:24:31.929 13:48:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:31.929 13:48:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:31.930 13:48:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:31.930 13:48:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:31.930 13:48:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:32.187 [2024-07-10 13:48:11.435061] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:32.187 [2024-07-10 13:48:11.436107] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:32.187 [2024-07-10 13:48:11.436190] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@607 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.187 13:48:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.446 13:48:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.446 "name": "raid_bdev1", 00:24:32.446 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:32.446 "strip_size_kb": 64, 00:24:32.446 "state": "online", 00:24:32.446 "raid_level": "raid5f", 00:24:32.446 "superblock": true, 00:24:32.446 "num_base_bdevs": 3, 00:24:32.446 "num_base_bdevs_discovered": 2, 00:24:32.446 "num_base_bdevs_operational": 2, 00:24:32.446 "base_bdevs_list": [ 00:24:32.446 { 00:24:32.446 "name": null, 00:24:32.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.446 "is_configured": false, 00:24:32.446 "data_offset": 2048, 00:24:32.446 "data_size": 63488 00:24:32.446 }, 00:24:32.446 { 00:24:32.446 "name": "BaseBdev2", 00:24:32.446 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:32.446 "is_configured": true, 00:24:32.446 "data_offset": 2048, 00:24:32.446 "data_size": 63488 00:24:32.446 }, 00:24:32.446 { 00:24:32.446 "name": "BaseBdev3", 00:24:32.446 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:32.446 "is_configured": true, 00:24:32.446 "data_offset": 2048, 00:24:32.446 "data_size": 63488 00:24:32.446 } 00:24:32.446 ] 00:24:32.446 }' 00:24:32.446 13:48:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.446 13:48:11 -- common/autotest_common.sh@10 -- # set +x 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.014 13:48:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.275 13:48:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:33.275 "name": "raid_bdev1", 00:24:33.275 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:33.275 "strip_size_kb": 64, 00:24:33.275 "state": "online", 00:24:33.275 "raid_level": "raid5f", 00:24:33.275 "superblock": true, 00:24:33.275 "num_base_bdevs": 3, 00:24:33.275 "num_base_bdevs_discovered": 2, 00:24:33.275 "num_base_bdevs_operational": 2, 00:24:33.275 "base_bdevs_list": [ 00:24:33.275 { 00:24:33.275 "name": null, 00:24:33.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.275 "is_configured": false, 00:24:33.275 
"data_offset": 2048, 00:24:33.275 "data_size": 63488 00:24:33.275 }, 00:24:33.275 { 00:24:33.275 "name": "BaseBdev2", 00:24:33.275 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:33.275 "is_configured": true, 00:24:33.275 "data_offset": 2048, 00:24:33.275 "data_size": 63488 00:24:33.275 }, 00:24:33.275 { 00:24:33.275 "name": "BaseBdev3", 00:24:33.275 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:33.275 "is_configured": true, 00:24:33.275 "data_offset": 2048, 00:24:33.275 "data_size": 63488 00:24:33.275 } 00:24:33.275 ] 00:24:33.275 }' 00:24:33.275 13:48:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:33.275 13:48:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:33.275 13:48:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:33.534 13:48:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:33.534 13:48:12 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:33.534 [2024-07-10 13:48:12.881868] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:33.534 [2024-07-10 13:48:12.881921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:33.791 [2024-07-10 13:48:12.896098] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:24:33.791 [2024-07-10 13:48:12.902900] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:33.791 13:48:12 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.728 13:48:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:34.987 "name": "raid_bdev1", 00:24:34.987 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:34.987 "strip_size_kb": 64, 00:24:34.987 "state": "online", 00:24:34.987 "raid_level": "raid5f", 00:24:34.987 "superblock": true, 00:24:34.987 "num_base_bdevs": 3, 00:24:34.987 "num_base_bdevs_discovered": 3, 00:24:34.987 "num_base_bdevs_operational": 3, 00:24:34.987 "process": { 00:24:34.987 "type": "rebuild", 00:24:34.987 "target": "spare", 00:24:34.987 "progress": { 00:24:34.987 "blocks": 22528, 00:24:34.987 "percent": 17 00:24:34.987 } 00:24:34.987 }, 00:24:34.987 "base_bdevs_list": [ 00:24:34.987 { 00:24:34.987 "name": "spare", 00:24:34.987 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:34.987 "is_configured": true, 00:24:34.987 "data_offset": 2048, 00:24:34.987 "data_size": 63488 00:24:34.987 }, 00:24:34.987 { 00:24:34.987 "name": "BaseBdev2", 00:24:34.987 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:34.987 "is_configured": true, 00:24:34.987 "data_offset": 2048, 00:24:34.987 "data_size": 63488 00:24:34.987 }, 00:24:34.987 { 00:24:34.987 "name": "BaseBdev3", 00:24:34.987 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:34.987 "is_configured": true, 00:24:34.987 "data_offset": 
2048, 00:24:34.987 "data_size": 63488 00:24:34.987 } 00:24:34.987 ] 00:24:34.987 }' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:34.987 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@657 -- # local timeout=609 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.987 13:48:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:35.247 "name": "raid_bdev1", 00:24:35.247 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:35.247 "strip_size_kb": 64, 00:24:35.247 "state": "online", 00:24:35.247 "raid_level": "raid5f", 00:24:35.247 "superblock": true, 00:24:35.247 "num_base_bdevs": 3, 00:24:35.247 "num_base_bdevs_discovered": 3, 00:24:35.247 "num_base_bdevs_operational": 3, 00:24:35.247 "process": { 00:24:35.247 "type": "rebuild", 00:24:35.247 "target": "spare", 00:24:35.247 "progress": { 00:24:35.247 "blocks": 30720, 00:24:35.247 "percent": 24 00:24:35.247 } 00:24:35.247 }, 00:24:35.247 "base_bdevs_list": [ 00:24:35.247 { 00:24:35.247 "name": "spare", 00:24:35.247 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:35.247 "is_configured": true, 00:24:35.247 "data_offset": 2048, 00:24:35.247 "data_size": 63488 00:24:35.247 }, 00:24:35.247 { 00:24:35.247 "name": "BaseBdev2", 00:24:35.247 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:35.247 "is_configured": true, 00:24:35.247 "data_offset": 2048, 00:24:35.247 "data_size": 63488 00:24:35.247 }, 00:24:35.247 { 00:24:35.247 "name": "BaseBdev3", 00:24:35.247 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:35.247 "is_configured": true, 00:24:35.247 "data_offset": 2048, 00:24:35.247 "data_size": 63488 00:24:35.247 } 00:24:35.247 ] 00:24:35.247 }' 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.247 13:48:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:36.676 "name": "raid_bdev1", 00:24:36.676 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:36.676 "strip_size_kb": 64, 00:24:36.676 "state": "online", 00:24:36.676 "raid_level": "raid5f", 00:24:36.676 "superblock": true, 00:24:36.676 "num_base_bdevs": 3, 00:24:36.676 "num_base_bdevs_discovered": 3, 00:24:36.676 "num_base_bdevs_operational": 3, 00:24:36.676 "process": { 00:24:36.676 "type": "rebuild", 00:24:36.676 "target": "spare", 00:24:36.676 "progress": { 00:24:36.676 "blocks": 57344, 00:24:36.676 "percent": 45 00:24:36.676 } 00:24:36.676 }, 00:24:36.676 "base_bdevs_list": [ 00:24:36.676 { 00:24:36.676 "name": "spare", 00:24:36.676 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:36.676 "is_configured": true, 00:24:36.676 "data_offset": 2048, 00:24:36.676 "data_size": 63488 00:24:36.676 }, 00:24:36.676 { 00:24:36.676 "name": "BaseBdev2", 00:24:36.676 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:36.676 "is_configured": true, 00:24:36.676 "data_offset": 2048, 00:24:36.676 "data_size": 63488 00:24:36.676 }, 00:24:36.676 { 00:24:36.676 "name": "BaseBdev3", 00:24:36.676 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:36.676 "is_configured": true, 00:24:36.676 "data_offset": 2048, 00:24:36.676 "data_size": 63488 00:24:36.676 } 00:24:36.676 ] 00:24:36.676 }' 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:36.676 13:48:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.615 13:48:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.876 13:48:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:37.876 "name": "raid_bdev1", 00:24:37.876 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:37.876 "strip_size_kb": 64, 00:24:37.876 "state": "online", 00:24:37.876 "raid_level": "raid5f", 00:24:37.876 "superblock": true, 00:24:37.876 "num_base_bdevs": 3, 00:24:37.876 "num_base_bdevs_discovered": 3, 00:24:37.876 "num_base_bdevs_operational": 3, 00:24:37.876 "process": { 00:24:37.876 "type": "rebuild", 
00:24:37.876 "target": "spare", 00:24:37.876 "progress": { 00:24:37.876 "blocks": 83968, 00:24:37.876 "percent": 66 00:24:37.876 } 00:24:37.876 }, 00:24:37.876 "base_bdevs_list": [ 00:24:37.876 { 00:24:37.876 "name": "spare", 00:24:37.876 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:37.876 "is_configured": true, 00:24:37.876 "data_offset": 2048, 00:24:37.876 "data_size": 63488 00:24:37.876 }, 00:24:37.876 { 00:24:37.876 "name": "BaseBdev2", 00:24:37.876 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:37.876 "is_configured": true, 00:24:37.876 "data_offset": 2048, 00:24:37.876 "data_size": 63488 00:24:37.876 }, 00:24:37.876 { 00:24:37.876 "name": "BaseBdev3", 00:24:37.876 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:37.876 "is_configured": true, 00:24:37.876 "data_offset": 2048, 00:24:37.876 "data_size": 63488 00:24:37.876 } 00:24:37.876 ] 00:24:37.876 }' 00:24:37.876 13:48:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:37.876 13:48:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.876 13:48:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:38.136 13:48:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.136 13:48:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.075 13:48:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:39.335 "name": "raid_bdev1", 00:24:39.335 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:39.335 "strip_size_kb": 64, 00:24:39.335 "state": "online", 00:24:39.335 "raid_level": "raid5f", 00:24:39.335 "superblock": true, 00:24:39.335 "num_base_bdevs": 3, 00:24:39.335 "num_base_bdevs_discovered": 3, 00:24:39.335 "num_base_bdevs_operational": 3, 00:24:39.335 "process": { 00:24:39.335 "type": "rebuild", 00:24:39.335 "target": "spare", 00:24:39.335 "progress": { 00:24:39.335 "blocks": 110592, 00:24:39.335 "percent": 87 00:24:39.335 } 00:24:39.335 }, 00:24:39.335 "base_bdevs_list": [ 00:24:39.335 { 00:24:39.335 "name": "spare", 00:24:39.335 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:39.335 "is_configured": true, 00:24:39.335 "data_offset": 2048, 00:24:39.335 "data_size": 63488 00:24:39.335 }, 00:24:39.335 { 00:24:39.335 "name": "BaseBdev2", 00:24:39.335 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:39.335 "is_configured": true, 00:24:39.335 "data_offset": 2048, 00:24:39.335 "data_size": 63488 00:24:39.335 }, 00:24:39.335 { 00:24:39.335 "name": "BaseBdev3", 00:24:39.335 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:39.335 "is_configured": true, 00:24:39.335 "data_offset": 2048, 00:24:39.335 "data_size": 63488 00:24:39.335 } 00:24:39.335 ] 00:24:39.335 }' 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.335 13:48:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:39.903 [2024-07-10 13:48:19.153241] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:39.903 [2024-07-10 13:48:19.153374] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:39.903 [2024-07-10 13:48:19.153567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.473 "name": "raid_bdev1", 00:24:40.473 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:40.473 "strip_size_kb": 64, 00:24:40.473 "state": "online", 00:24:40.473 "raid_level": "raid5f", 00:24:40.473 "superblock": true, 00:24:40.473 "num_base_bdevs": 3, 00:24:40.473 "num_base_bdevs_discovered": 3, 00:24:40.473 "num_base_bdevs_operational": 3, 00:24:40.473 "base_bdevs_list": [ 00:24:40.473 { 00:24:40.473 "name": "spare", 00:24:40.473 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:40.473 "is_configured": true, 00:24:40.473 "data_offset": 2048, 00:24:40.473 "data_size": 63488 00:24:40.473 }, 00:24:40.473 { 00:24:40.473 "name": "BaseBdev2", 00:24:40.473 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:40.473 "is_configured": true, 00:24:40.473 "data_offset": 2048, 00:24:40.473 "data_size": 63488 00:24:40.473 }, 00:24:40.473 { 00:24:40.473 "name": "BaseBdev3", 00:24:40.473 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:40.473 "is_configured": true, 00:24:40.473 "data_offset": 2048, 00:24:40.473 "data_size": 63488 00:24:40.473 } 00:24:40.473 ] 00:24:40.473 }' 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:40.473 13:48:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@660 -- # break 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.731 13:48:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.731 
13:48:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.731 "name": "raid_bdev1", 00:24:40.731 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:40.731 "strip_size_kb": 64, 00:24:40.731 "state": "online", 00:24:40.731 "raid_level": "raid5f", 00:24:40.731 "superblock": true, 00:24:40.731 "num_base_bdevs": 3, 00:24:40.731 "num_base_bdevs_discovered": 3, 00:24:40.731 "num_base_bdevs_operational": 3, 00:24:40.731 "base_bdevs_list": [ 00:24:40.731 { 00:24:40.731 "name": "spare", 00:24:40.731 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:40.731 "is_configured": true, 00:24:40.731 "data_offset": 2048, 00:24:40.731 "data_size": 63488 00:24:40.731 }, 00:24:40.731 { 00:24:40.731 "name": "BaseBdev2", 00:24:40.731 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:40.731 "is_configured": true, 00:24:40.731 "data_offset": 2048, 00:24:40.731 "data_size": 63488 00:24:40.731 }, 00:24:40.731 { 00:24:40.731 "name": "BaseBdev3", 00:24:40.731 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:40.731 "is_configured": true, 00:24:40.731 "data_offset": 2048, 00:24:40.731 "data_size": 63488 00:24:40.731 } 00:24:40.731 ] 00:24:40.731 }' 00:24:40.731 13:48:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.997 "name": "raid_bdev1", 00:24:40.997 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:40.997 "strip_size_kb": 64, 00:24:40.997 "state": "online", 00:24:40.997 "raid_level": "raid5f", 00:24:40.997 "superblock": true, 00:24:40.997 "num_base_bdevs": 3, 00:24:40.997 "num_base_bdevs_discovered": 3, 00:24:40.997 "num_base_bdevs_operational": 3, 00:24:40.997 "base_bdevs_list": [ 00:24:40.997 { 00:24:40.997 "name": "spare", 00:24:40.997 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:40.997 "is_configured": true, 00:24:40.997 "data_offset": 2048, 00:24:40.997 "data_size": 63488 00:24:40.997 }, 00:24:40.997 { 00:24:40.997 "name": "BaseBdev2", 00:24:40.997 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:40.997 "is_configured": true, 00:24:40.997 "data_offset": 2048, 00:24:40.997 "data_size": 63488 00:24:40.997 }, 00:24:40.997 { 00:24:40.997 "name": "BaseBdev3", 00:24:40.997 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:40.997 
"is_configured": true, 00:24:40.997 "data_offset": 2048, 00:24:40.997 "data_size": 63488 00:24:40.997 } 00:24:40.997 ] 00:24:40.997 }' 00:24:40.997 13:48:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.997 13:48:20 -- common/autotest_common.sh@10 -- # set +x 00:24:41.947 13:48:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:41.948 [2024-07-10 13:48:21.146330] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.948 [2024-07-10 13:48:21.146424] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.948 [2024-07-10 13:48:21.146517] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.948 [2024-07-10 13:48:21.146602] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.948 [2024-07-10 13:48:21.146622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:41.948 13:48:21 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.948 13:48:21 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:42.206 13:48:21 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:42.206 13:48:21 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:42.206 13:48:21 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@12 -- # local i 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:42.206 /dev/nbd0 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:42.206 13:48:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:42.206 13:48:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:42.206 13:48:21 -- common/autotest_common.sh@857 -- # local i 00:24:42.206 13:48:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:42.206 13:48:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:42.206 13:48:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:42.206 13:48:21 -- common/autotest_common.sh@861 -- # break 00:24:42.206 13:48:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:42.206 13:48:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:42.206 13:48:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.206 1+0 records in 00:24:42.206 1+0 records out 00:24:42.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042291 s, 9.7 MB/s 00:24:42.206 13:48:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.464 13:48:21 -- common/autotest_common.sh@874 -- # size=4096 00:24:42.464 13:48:21 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.464 13:48:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:42.464 13:48:21 -- common/autotest_common.sh@877 -- # return 0 00:24:42.464 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.464 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.464 13:48:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:42.464 /dev/nbd1 00:24:42.464 13:48:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:42.464 13:48:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:42.464 13:48:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:42.464 13:48:21 -- common/autotest_common.sh@857 -- # local i 00:24:42.464 13:48:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:42.464 13:48:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:42.464 13:48:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:42.464 13:48:21 -- common/autotest_common.sh@861 -- # break 00:24:42.464 13:48:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:42.464 13:48:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:42.464 13:48:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.464 1+0 records in 00:24:42.464 1+0 records out 00:24:42.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261001 s, 15.7 MB/s 00:24:42.464 13:48:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.464 13:48:21 -- common/autotest_common.sh@874 -- # size=4096 00:24:42.464 13:48:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.722 13:48:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:42.722 13:48:21 -- common/autotest_common.sh@877 -- # return 0 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:42.722 13:48:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:42.722 13:48:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@51 -- # local i 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.722 13:48:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@41 -- # break 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.980 13:48:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@41 -- # break 00:24:43.238 13:48:22 -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.238 13:48:22 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:43.238 13:48:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:43.238 13:48:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:43.238 13:48:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:43.497 13:48:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:43.756 [2024-07-10 13:48:22.912071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:43.756 [2024-07-10 13:48:22.912190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.756 [2024-07-10 13:48:22.912248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:43.756 [2024-07-10 13:48:22.912299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.756 [2024-07-10 13:48:22.914419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.756 [2024-07-10 13:48:22.914532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:43.756 [2024-07-10 13:48:22.914691] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:43.756 [2024-07-10 13:48:22.914803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.756 BaseBdev1 00:24:43.756 13:48:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:43.756 13:48:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:43.756 13:48:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:44.013 13:48:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:44.013 [2024-07-10 13:48:23.287418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:44.013 [2024-07-10 13:48:23.287569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.013 [2024-07-10 13:48:23.287615] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:44.013 [2024-07-10 13:48:23.287660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.013 [2024-07-10 13:48:23.288102] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.013 [2024-07-10 13:48:23.288174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:44.013 [2024-07-10 13:48:23.288298] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:44.013 [2024-07-10 13:48:23.288329] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:44.013 [2024-07-10 13:48:23.288349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.013 [2024-07-10 13:48:23.288381] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:24:44.013 [2024-07-10 13:48:23.288466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.013 BaseBdev2 00:24:44.013 13:48:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:44.013 13:48:23 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:44.013 13:48:23 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:44.270 13:48:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:44.528 [2024-07-10 13:48:23.694728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:44.528 [2024-07-10 13:48:23.694876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.528 [2024-07-10 13:48:23.694927] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:44.528 [2024-07-10 13:48:23.694960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.528 [2024-07-10 13:48:23.695431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.528 [2024-07-10 13:48:23.695508] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:44.528 [2024-07-10 13:48:23.695642] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:44.528 [2024-07-10 13:48:23.695689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:44.528 BaseBdev3 00:24:44.528 13:48:23 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:44.786 13:48:23 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:44.786 [2024-07-10 13:48:24.113992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:44.786 [2024-07-10 13:48:24.114121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.786 [2024-07-10 13:48:24.114166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:44.786 [2024-07-10 13:48:24.114206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.786 [2024-07-10 13:48:24.114663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.786 [2024-07-10 13:48:24.114733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:44.786 [2024-07-10 13:48:24.114877] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:44.786 [2024-07-10 13:48:24.114950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.786 spare 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.786 13:48:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.043 [2024-07-10 13:48:24.214899] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:24:45.043 [2024-07-10 13:48:24.214985] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:45.043 [2024-07-10 13:48:24.215164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004b590 00:24:45.043 [2024-07-10 13:48:24.221113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:24:45.043 [2024-07-10 13:48:24.221174] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:24:45.043 [2024-07-10 13:48:24.221394] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.043 13:48:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.043 "name": "raid_bdev1", 00:24:45.043 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:45.043 "strip_size_kb": 64, 00:24:45.043 "state": "online", 00:24:45.043 "raid_level": "raid5f", 00:24:45.043 "superblock": true, 00:24:45.043 "num_base_bdevs": 3, 00:24:45.043 "num_base_bdevs_discovered": 3, 00:24:45.043 "num_base_bdevs_operational": 3, 00:24:45.043 "base_bdevs_list": [ 00:24:45.043 { 00:24:45.043 "name": "spare", 00:24:45.043 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:45.043 "is_configured": true, 00:24:45.043 "data_offset": 2048, 00:24:45.043 "data_size": 63488 00:24:45.043 }, 00:24:45.043 { 00:24:45.043 "name": "BaseBdev2", 00:24:45.043 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:45.043 "is_configured": true, 00:24:45.043 "data_offset": 2048, 00:24:45.043 "data_size": 63488 00:24:45.043 }, 00:24:45.043 { 00:24:45.043 "name": "BaseBdev3", 00:24:45.043 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:45.043 "is_configured": true, 00:24:45.043 "data_offset": 2048, 00:24:45.043 "data_size": 63488 00:24:45.043 } 00:24:45.043 ] 00:24:45.043 }' 00:24:45.043 13:48:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.043 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.612 
13:48:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.612 13:48:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.872 "name": "raid_bdev1", 00:24:45.872 "uuid": "96ebd90d-6745-4d27-8caa-eb806f2c1d28", 00:24:45.872 "strip_size_kb": 64, 00:24:45.872 "state": "online", 00:24:45.872 "raid_level": "raid5f", 00:24:45.872 "superblock": true, 00:24:45.872 "num_base_bdevs": 3, 00:24:45.872 "num_base_bdevs_discovered": 3, 00:24:45.872 "num_base_bdevs_operational": 3, 00:24:45.872 "base_bdevs_list": [ 00:24:45.872 { 00:24:45.872 "name": "spare", 00:24:45.872 "uuid": "0b3d7860-ba22-5d7c-b156-463933e377c5", 00:24:45.872 "is_configured": true, 00:24:45.872 "data_offset": 2048, 00:24:45.872 "data_size": 63488 00:24:45.872 }, 00:24:45.872 { 00:24:45.872 "name": "BaseBdev2", 00:24:45.872 "uuid": "4dc8125d-2958-5995-86f6-d0755105e9be", 00:24:45.872 "is_configured": true, 00:24:45.872 "data_offset": 2048, 00:24:45.872 "data_size": 63488 00:24:45.872 }, 00:24:45.872 { 00:24:45.872 "name": "BaseBdev3", 00:24:45.872 "uuid": "5dec71fd-117b-5f47-acb3-198b8876c20a", 00:24:45.872 "is_configured": true, 00:24:45.872 "data_offset": 2048, 00:24:45.872 "data_size": 63488 00:24:45.872 } 00:24:45.872 ] 00:24:45.872 }' 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.872 13:48:25 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:46.131 13:48:25 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.131 13:48:25 -- bdev/bdev_raid.sh@709 -- # killprocess 132570 00:24:46.131 13:48:25 -- common/autotest_common.sh@926 -- # '[' -z 132570 ']' 00:24:46.131 13:48:25 -- common/autotest_common.sh@930 -- # kill -0 132570 00:24:46.131 13:48:25 -- common/autotest_common.sh@931 -- # uname 00:24:46.131 13:48:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:46.131 13:48:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132570 00:24:46.131 killing process with pid 132570 00:24:46.131 Received shutdown signal, test time was about 60.000000 seconds 00:24:46.131 00:24:46.131 Latency(us) 00:24:46.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.131 =================================================================================================================== 00:24:46.131 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:46.131 13:48:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:46.131 13:48:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:46.131 13:48:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132570' 00:24:46.131 13:48:25 -- common/autotest_common.sh@945 -- # kill 132570 00:24:46.131 13:48:25 -- 
common/autotest_common.sh@950 -- # wait 132570 00:24:46.131 [2024-07-10 13:48:25.442308] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.131 [2024-07-10 13:48:25.442379] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.131 [2024-07-10 13:48:25.442486] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.131 [2024-07-10 13:48:25.442505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:24:46.700 [2024-07-10 13:48:25.805745] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.081 ************************************ 00:24:48.081 END TEST raid5f_rebuild_test_sb 00:24:48.081 ************************************ 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:48.081 00:24:48.081 real 0m23.667s 00:24:48.081 user 0m36.379s 00:24:48.081 sys 0m2.969s 00:24:48.081 13:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.081 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:48.081 13:48:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:48.081 13:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:48.081 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:24:48.081 ************************************ 00:24:48.081 START TEST raid5f_state_function_test 00:24:48.081 ************************************ 00:24:48.081 13:48:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@210 -- # local 
superblock_create_arg 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=133243 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133243' 00:24:48.081 Process raid pid: 133243 00:24:48.081 13:48:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133243 /var/tmp/spdk-raid.sock 00:24:48.081 13:48:27 -- common/autotest_common.sh@819 -- # '[' -z 133243 ']' 00:24:48.081 13:48:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.081 13:48:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:48.081 13:48:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:48.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:48.081 13:48:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:48.081 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:24:48.081 [2024-07-10 13:48:27.170694] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:48.081 [2024-07-10 13:48:27.170927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.081 [2024-07-10 13:48:27.331477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.339 [2024-07-10 13:48:27.532505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.599 [2024-07-10 13:48:27.739954] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.858 13:48:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:48.858 13:48:27 -- common/autotest_common.sh@852 -- # return 0 00:24:48.858 13:48:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:48.858 [2024-07-10 13:48:28.137478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:48.858 [2024-07-10 13:48:28.137651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:48.858 [2024-07-10 13:48:28.137692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:48.858 [2024-07-10 13:48:28.137727] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:48.858 [2024-07-10 13:48:28.137782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:48.858 [2024-07-10 13:48:28.137845] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:48.858 [2024-07-10 13:48:28.137877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:48.858 [2024-07-10 13:48:28.137914] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.858 13:48:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.117 13:48:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:49.117 "name": "Existed_Raid", 00:24:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.117 "strip_size_kb": 64, 00:24:49.117 "state": "configuring", 00:24:49.117 "raid_level": "raid5f", 00:24:49.117 "superblock": false, 00:24:49.117 "num_base_bdevs": 4, 00:24:49.117 "num_base_bdevs_discovered": 0, 00:24:49.117 "num_base_bdevs_operational": 4, 00:24:49.117 "base_bdevs_list": [ 00:24:49.117 { 00:24:49.117 "name": "BaseBdev1", 00:24:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.117 "is_configured": false, 00:24:49.117 "data_offset": 0, 00:24:49.117 "data_size": 0 00:24:49.117 }, 00:24:49.117 { 00:24:49.117 "name": "BaseBdev2", 00:24:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.117 "is_configured": false, 00:24:49.117 "data_offset": 0, 00:24:49.117 "data_size": 0 00:24:49.117 }, 00:24:49.117 { 00:24:49.117 "name": "BaseBdev3", 00:24:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.117 "is_configured": false, 00:24:49.117 "data_offset": 0, 00:24:49.117 "data_size": 0 00:24:49.117 }, 00:24:49.117 { 00:24:49.117 "name": "BaseBdev4", 00:24:49.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.117 "is_configured": false, 00:24:49.117 "data_offset": 0, 00:24:49.117 "data_size": 0 00:24:49.117 } 00:24:49.117 ] 00:24:49.117 }' 00:24:49.117 13:48:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:49.117 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:24:49.685 13:48:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:49.943 [2024-07-10 13:48:29.075710] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:49.943 [2024-07-10 13:48:29.075832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:49.943 13:48:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:49.943 [2024-07-10 13:48:29.271406] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:49.943 [2024-07-10 13:48:29.271533] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:24:49.943 [2024-07-10 13:48:29.271584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.943 [2024-07-10 13:48:29.271645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.943 [2024-07-10 13:48:29.271680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:49.943 [2024-07-10 13:48:29.271739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.943 [2024-07-10 13:48:29.271769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:49.943 [2024-07-10 13:48:29.271822] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:49.943 13:48:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:50.202 [2024-07-10 13:48:29.490154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:50.202 BaseBdev1 00:24:50.202 13:48:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:50.202 13:48:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:50.202 13:48:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:50.202 13:48:29 -- common/autotest_common.sh@889 -- # local i 00:24:50.202 13:48:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:50.202 13:48:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:50.202 13:48:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:50.461 13:48:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:50.720 [ 00:24:50.720 { 00:24:50.720 "name": "BaseBdev1", 00:24:50.720 "aliases": [ 00:24:50.720 "1972962c-7e42-4cf3-b751-aaf7771b3c8e" 00:24:50.720 ], 00:24:50.720 "product_name": "Malloc disk", 00:24:50.720 "block_size": 512, 00:24:50.720 "num_blocks": 65536, 00:24:50.720 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:50.720 "assigned_rate_limits": { 00:24:50.720 "rw_ios_per_sec": 0, 00:24:50.720 "rw_mbytes_per_sec": 0, 00:24:50.720 "r_mbytes_per_sec": 0, 00:24:50.720 "w_mbytes_per_sec": 0 00:24:50.720 }, 00:24:50.720 "claimed": true, 00:24:50.720 "claim_type": "exclusive_write", 00:24:50.720 "zoned": false, 00:24:50.720 "supported_io_types": { 00:24:50.720 "read": true, 00:24:50.720 "write": true, 00:24:50.720 "unmap": true, 00:24:50.720 "write_zeroes": true, 00:24:50.720 "flush": true, 00:24:50.720 "reset": true, 00:24:50.720 "compare": false, 00:24:50.720 "compare_and_write": false, 00:24:50.720 "abort": true, 00:24:50.720 "nvme_admin": false, 00:24:50.720 "nvme_io": false 00:24:50.720 }, 00:24:50.720 "memory_domains": [ 00:24:50.720 { 00:24:50.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.720 "dma_device_type": 2 00:24:50.720 } 00:24:50.720 ], 00:24:50.720 "driver_specific": {} 00:24:50.720 } 00:24:50.720 ] 00:24:50.720 13:48:29 -- common/autotest_common.sh@895 -- # return 0 00:24:50.720 13:48:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:50.720 13:48:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.721 13:48:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.721 13:48:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.721 "name": "Existed_Raid", 00:24:50.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.721 "strip_size_kb": 64, 00:24:50.721 "state": "configuring", 00:24:50.721 "raid_level": "raid5f", 00:24:50.721 "superblock": false, 00:24:50.721 "num_base_bdevs": 4, 00:24:50.721 "num_base_bdevs_discovered": 1, 00:24:50.721 "num_base_bdevs_operational": 4, 00:24:50.721 "base_bdevs_list": [ 00:24:50.721 { 00:24:50.721 "name": "BaseBdev1", 00:24:50.721 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:50.721 "is_configured": true, 00:24:50.721 "data_offset": 0, 00:24:50.721 "data_size": 65536 00:24:50.721 }, 00:24:50.721 { 00:24:50.721 "name": "BaseBdev2", 00:24:50.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.721 "is_configured": false, 00:24:50.721 "data_offset": 0, 00:24:50.721 "data_size": 0 00:24:50.721 }, 00:24:50.721 { 00:24:50.721 "name": "BaseBdev3", 00:24:50.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.721 "is_configured": false, 00:24:50.721 "data_offset": 0, 00:24:50.721 "data_size": 0 00:24:50.721 }, 00:24:50.721 { 00:24:50.721 "name": "BaseBdev4", 00:24:50.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.721 "is_configured": false, 00:24:50.721 "data_offset": 0, 00:24:50.721 "data_size": 0 00:24:50.721 } 00:24:50.721 ] 00:24:50.721 }' 00:24:50.721 13:48:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.721 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:24:51.289 13:48:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:51.548 [2024-07-10 13:48:30.807931] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:51.548 [2024-07-10 13:48:30.808062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:51.548 13:48:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:51.548 13:48:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:51.805 [2024-07-10 13:48:31.003650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:51.805 [2024-07-10 13:48:31.005345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:51.805 [2024-07-10 13:48:31.005459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:51.805 [2024-07-10 13:48:31.005494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:51.805 [2024-07-10 13:48:31.005535] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:51.805 [2024-07-10 13:48:31.005575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:51.805 [2024-07-10 13:48:31.005614] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.805 13:48:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.063 13:48:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.063 "name": "Existed_Raid", 00:24:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.063 "strip_size_kb": 64, 00:24:52.063 "state": "configuring", 00:24:52.063 "raid_level": "raid5f", 00:24:52.063 "superblock": false, 00:24:52.063 "num_base_bdevs": 4, 00:24:52.063 "num_base_bdevs_discovered": 1, 00:24:52.063 "num_base_bdevs_operational": 4, 00:24:52.063 "base_bdevs_list": [ 00:24:52.063 { 00:24:52.063 "name": "BaseBdev1", 00:24:52.063 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:52.063 "is_configured": true, 00:24:52.063 "data_offset": 0, 00:24:52.063 "data_size": 65536 00:24:52.063 }, 00:24:52.063 { 00:24:52.063 "name": "BaseBdev2", 00:24:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.063 "is_configured": false, 00:24:52.063 "data_offset": 0, 00:24:52.063 "data_size": 0 00:24:52.063 }, 00:24:52.063 { 00:24:52.063 "name": "BaseBdev3", 00:24:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.063 "is_configured": false, 00:24:52.063 "data_offset": 0, 00:24:52.063 "data_size": 0 00:24:52.063 }, 00:24:52.063 { 00:24:52.063 "name": "BaseBdev4", 00:24:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.063 "is_configured": false, 00:24:52.063 "data_offset": 0, 00:24:52.063 "data_size": 0 00:24:52.063 } 00:24:52.063 ] 00:24:52.063 }' 00:24:52.063 13:48:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.063 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:24:52.628 13:48:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:52.886 [2024-07-10 13:48:32.051972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:52.886 BaseBdev2 00:24:52.886 13:48:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:52.886 13:48:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:52.886 13:48:32 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:52.886 13:48:32 -- common/autotest_common.sh@889 -- # local i 00:24:52.886 13:48:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:52.886 13:48:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:52.886 13:48:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:53.144 13:48:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:53.144 [ 00:24:53.144 { 00:24:53.144 "name": "BaseBdev2", 00:24:53.144 "aliases": [ 00:24:53.144 "d67f3bc7-f0a5-4e99-8131-015ee1ee822b" 00:24:53.144 ], 00:24:53.144 "product_name": "Malloc disk", 00:24:53.144 "block_size": 512, 00:24:53.144 "num_blocks": 65536, 00:24:53.144 "uuid": "d67f3bc7-f0a5-4e99-8131-015ee1ee822b", 00:24:53.144 "assigned_rate_limits": { 00:24:53.144 "rw_ios_per_sec": 0, 00:24:53.144 "rw_mbytes_per_sec": 0, 00:24:53.144 "r_mbytes_per_sec": 0, 00:24:53.144 "w_mbytes_per_sec": 0 00:24:53.144 }, 00:24:53.144 "claimed": true, 00:24:53.144 "claim_type": "exclusive_write", 00:24:53.144 "zoned": false, 00:24:53.144 "supported_io_types": { 00:24:53.144 "read": true, 00:24:53.144 "write": true, 00:24:53.144 "unmap": true, 00:24:53.144 "write_zeroes": true, 00:24:53.144 "flush": true, 00:24:53.144 "reset": true, 00:24:53.144 "compare": false, 00:24:53.144 "compare_and_write": false, 00:24:53.144 "abort": true, 00:24:53.144 "nvme_admin": false, 00:24:53.144 "nvme_io": false 00:24:53.144 }, 00:24:53.144 "memory_domains": [ 00:24:53.144 { 00:24:53.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.144 "dma_device_type": 2 00:24:53.144 } 00:24:53.144 ], 00:24:53.144 "driver_specific": {} 00:24:53.144 } 00:24:53.144 ] 00:24:53.144 13:48:32 -- common/autotest_common.sh@895 -- # return 0 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.144 13:48:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.402 13:48:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.402 "name": "Existed_Raid", 00:24:53.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.402 "strip_size_kb": 64, 00:24:53.402 "state": "configuring", 00:24:53.402 "raid_level": "raid5f", 00:24:53.402 "superblock": false, 00:24:53.402 "num_base_bdevs": 4, 00:24:53.402 "num_base_bdevs_discovered": 2, 00:24:53.402 "num_base_bdevs_operational": 4, 
00:24:53.402 "base_bdevs_list": [ 00:24:53.402 { 00:24:53.402 "name": "BaseBdev1", 00:24:53.402 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:53.402 "is_configured": true, 00:24:53.402 "data_offset": 0, 00:24:53.402 "data_size": 65536 00:24:53.402 }, 00:24:53.402 { 00:24:53.402 "name": "BaseBdev2", 00:24:53.402 "uuid": "d67f3bc7-f0a5-4e99-8131-015ee1ee822b", 00:24:53.402 "is_configured": true, 00:24:53.402 "data_offset": 0, 00:24:53.402 "data_size": 65536 00:24:53.402 }, 00:24:53.402 { 00:24:53.402 "name": "BaseBdev3", 00:24:53.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.402 "is_configured": false, 00:24:53.402 "data_offset": 0, 00:24:53.402 "data_size": 0 00:24:53.402 }, 00:24:53.402 { 00:24:53.402 "name": "BaseBdev4", 00:24:53.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.402 "is_configured": false, 00:24:53.402 "data_offset": 0, 00:24:53.402 "data_size": 0 00:24:53.402 } 00:24:53.402 ] 00:24:53.402 }' 00:24:53.402 13:48:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.402 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:24:54.025 13:48:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:54.304 [2024-07-10 13:48:33.543938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:54.304 BaseBdev3 00:24:54.304 13:48:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:54.304 13:48:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:54.304 13:48:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:54.304 13:48:33 -- common/autotest_common.sh@889 -- # local i 00:24:54.304 13:48:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:54.304 13:48:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:54.304 13:48:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:54.562 13:48:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:54.820 [ 00:24:54.820 { 00:24:54.820 "name": "BaseBdev3", 00:24:54.820 "aliases": [ 00:24:54.820 "cfc8c05f-be8b-458b-adea-1e71f9c9866e" 00:24:54.820 ], 00:24:54.820 "product_name": "Malloc disk", 00:24:54.820 "block_size": 512, 00:24:54.820 "num_blocks": 65536, 00:24:54.820 "uuid": "cfc8c05f-be8b-458b-adea-1e71f9c9866e", 00:24:54.820 "assigned_rate_limits": { 00:24:54.820 "rw_ios_per_sec": 0, 00:24:54.820 "rw_mbytes_per_sec": 0, 00:24:54.820 "r_mbytes_per_sec": 0, 00:24:54.820 "w_mbytes_per_sec": 0 00:24:54.820 }, 00:24:54.820 "claimed": true, 00:24:54.820 "claim_type": "exclusive_write", 00:24:54.820 "zoned": false, 00:24:54.820 "supported_io_types": { 00:24:54.820 "read": true, 00:24:54.820 "write": true, 00:24:54.820 "unmap": true, 00:24:54.820 "write_zeroes": true, 00:24:54.820 "flush": true, 00:24:54.820 "reset": true, 00:24:54.820 "compare": false, 00:24:54.820 "compare_and_write": false, 00:24:54.820 "abort": true, 00:24:54.820 "nvme_admin": false, 00:24:54.820 "nvme_io": false 00:24:54.820 }, 00:24:54.820 "memory_domains": [ 00:24:54.820 { 00:24:54.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.820 "dma_device_type": 2 00:24:54.820 } 00:24:54.820 ], 00:24:54.820 "driver_specific": {} 00:24:54.820 } 00:24:54.820 ] 00:24:54.820 13:48:33 -- common/autotest_common.sh@895 -- # return 0 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@254 -- # 
(( i++ )) 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.820 13:48:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.820 13:48:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.820 "name": "Existed_Raid", 00:24:54.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.820 "strip_size_kb": 64, 00:24:54.820 "state": "configuring", 00:24:54.820 "raid_level": "raid5f", 00:24:54.820 "superblock": false, 00:24:54.820 "num_base_bdevs": 4, 00:24:54.820 "num_base_bdevs_discovered": 3, 00:24:54.820 "num_base_bdevs_operational": 4, 00:24:54.820 "base_bdevs_list": [ 00:24:54.820 { 00:24:54.820 "name": "BaseBdev1", 00:24:54.820 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:54.820 "is_configured": true, 00:24:54.820 "data_offset": 0, 00:24:54.820 "data_size": 65536 00:24:54.820 }, 00:24:54.820 { 00:24:54.820 "name": "BaseBdev2", 00:24:54.820 "uuid": "d67f3bc7-f0a5-4e99-8131-015ee1ee822b", 00:24:54.820 "is_configured": true, 00:24:54.820 "data_offset": 0, 00:24:54.820 "data_size": 65536 00:24:54.820 }, 00:24:54.820 { 00:24:54.820 "name": "BaseBdev3", 00:24:54.820 "uuid": "cfc8c05f-be8b-458b-adea-1e71f9c9866e", 00:24:54.820 "is_configured": true, 00:24:54.820 "data_offset": 0, 00:24:54.820 "data_size": 65536 00:24:54.820 }, 00:24:54.820 { 00:24:54.820 "name": "BaseBdev4", 00:24:54.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.820 "is_configured": false, 00:24:54.820 "data_offset": 0, 00:24:54.820 "data_size": 0 00:24:54.820 } 00:24:54.820 ] 00:24:54.820 }' 00:24:54.820 13:48:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.820 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:24:55.750 13:48:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:55.750 [2024-07-10 13:48:34.963318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:55.750 [2024-07-10 13:48:34.963452] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:55.750 [2024-07-10 13:48:34.963480] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:55.750 [2024-07-10 13:48:34.963650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:55.750 [2024-07-10 13:48:34.971373] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:24:55.750 [2024-07-10 13:48:34.971431] bdev_raid.c:1615:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:55.750 [2024-07-10 13:48:34.971723] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.750 BaseBdev4 00:24:55.750 13:48:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:55.750 13:48:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:55.750 13:48:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:55.750 13:48:34 -- common/autotest_common.sh@889 -- # local i 00:24:55.750 13:48:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:55.750 13:48:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:55.750 13:48:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:56.007 13:48:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:56.265 [ 00:24:56.265 { 00:24:56.265 "name": "BaseBdev4", 00:24:56.265 "aliases": [ 00:24:56.265 "bf9ed11c-5965-4f29-93d9-c60d3038ff12" 00:24:56.265 ], 00:24:56.265 "product_name": "Malloc disk", 00:24:56.265 "block_size": 512, 00:24:56.265 "num_blocks": 65536, 00:24:56.265 "uuid": "bf9ed11c-5965-4f29-93d9-c60d3038ff12", 00:24:56.265 "assigned_rate_limits": { 00:24:56.265 "rw_ios_per_sec": 0, 00:24:56.265 "rw_mbytes_per_sec": 0, 00:24:56.265 "r_mbytes_per_sec": 0, 00:24:56.265 "w_mbytes_per_sec": 0 00:24:56.265 }, 00:24:56.265 "claimed": true, 00:24:56.265 "claim_type": "exclusive_write", 00:24:56.265 "zoned": false, 00:24:56.265 "supported_io_types": { 00:24:56.265 "read": true, 00:24:56.265 "write": true, 00:24:56.265 "unmap": true, 00:24:56.265 "write_zeroes": true, 00:24:56.265 "flush": true, 00:24:56.265 "reset": true, 00:24:56.265 "compare": false, 00:24:56.265 "compare_and_write": false, 00:24:56.265 "abort": true, 00:24:56.265 "nvme_admin": false, 00:24:56.265 "nvme_io": false 00:24:56.265 }, 00:24:56.265 "memory_domains": [ 00:24:56.265 { 00:24:56.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.265 "dma_device_type": 2 00:24:56.265 } 00:24:56.265 ], 00:24:56.265 "driver_specific": {} 00:24:56.265 } 00:24:56.265 ] 00:24:56.265 13:48:35 -- common/autotest_common.sh@895 -- # return 0 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:24:56.265 "name": "Existed_Raid", 00:24:56.265 "uuid": "6f413dae-f014-44ce-82d8-36e39ca8bc54", 00:24:56.265 "strip_size_kb": 64, 00:24:56.265 "state": "online", 00:24:56.265 "raid_level": "raid5f", 00:24:56.265 "superblock": false, 00:24:56.265 "num_base_bdevs": 4, 00:24:56.265 "num_base_bdevs_discovered": 4, 00:24:56.265 "num_base_bdevs_operational": 4, 00:24:56.265 "base_bdevs_list": [ 00:24:56.265 { 00:24:56.265 "name": "BaseBdev1", 00:24:56.265 "uuid": "1972962c-7e42-4cf3-b751-aaf7771b3c8e", 00:24:56.265 "is_configured": true, 00:24:56.265 "data_offset": 0, 00:24:56.265 "data_size": 65536 00:24:56.265 }, 00:24:56.265 { 00:24:56.265 "name": "BaseBdev2", 00:24:56.265 "uuid": "d67f3bc7-f0a5-4e99-8131-015ee1ee822b", 00:24:56.265 "is_configured": true, 00:24:56.265 "data_offset": 0, 00:24:56.265 "data_size": 65536 00:24:56.265 }, 00:24:56.265 { 00:24:56.265 "name": "BaseBdev3", 00:24:56.265 "uuid": "cfc8c05f-be8b-458b-adea-1e71f9c9866e", 00:24:56.265 "is_configured": true, 00:24:56.265 "data_offset": 0, 00:24:56.265 "data_size": 65536 00:24:56.265 }, 00:24:56.265 { 00:24:56.265 "name": "BaseBdev4", 00:24:56.265 "uuid": "bf9ed11c-5965-4f29-93d9-c60d3038ff12", 00:24:56.265 "is_configured": true, 00:24:56.265 "data_offset": 0, 00:24:56.265 "data_size": 65536 00:24:56.265 } 00:24:56.265 ] 00:24:56.265 }' 00:24:56.265 13:48:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.265 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:57.201 [2024-07-10 13:48:36.385668] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.201 13:48:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.459 13:48:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.459 "name": "Existed_Raid", 00:24:57.459 "uuid": "6f413dae-f014-44ce-82d8-36e39ca8bc54", 00:24:57.459 "strip_size_kb": 64, 00:24:57.459 "state": "online", 00:24:57.459 "raid_level": "raid5f", 00:24:57.459 "superblock": false, 00:24:57.459 "num_base_bdevs": 4, 00:24:57.459 "num_base_bdevs_discovered": 3, 00:24:57.459 "num_base_bdevs_operational": 3, 
00:24:57.459 "base_bdevs_list": [ 00:24:57.459 { 00:24:57.459 "name": null, 00:24:57.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.459 "is_configured": false, 00:24:57.459 "data_offset": 0, 00:24:57.459 "data_size": 65536 00:24:57.459 }, 00:24:57.459 { 00:24:57.459 "name": "BaseBdev2", 00:24:57.459 "uuid": "d67f3bc7-f0a5-4e99-8131-015ee1ee822b", 00:24:57.459 "is_configured": true, 00:24:57.459 "data_offset": 0, 00:24:57.459 "data_size": 65536 00:24:57.459 }, 00:24:57.459 { 00:24:57.459 "name": "BaseBdev3", 00:24:57.459 "uuid": "cfc8c05f-be8b-458b-adea-1e71f9c9866e", 00:24:57.459 "is_configured": true, 00:24:57.459 "data_offset": 0, 00:24:57.459 "data_size": 65536 00:24:57.459 }, 00:24:57.459 { 00:24:57.459 "name": "BaseBdev4", 00:24:57.459 "uuid": "bf9ed11c-5965-4f29-93d9-c60d3038ff12", 00:24:57.459 "is_configured": true, 00:24:57.459 "data_offset": 0, 00:24:57.459 "data_size": 65536 00:24:57.459 } 00:24:57.459 ] 00:24:57.459 }' 00:24:57.459 13:48:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.459 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:24:58.025 13:48:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:58.025 13:48:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:58.025 13:48:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.025 13:48:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:58.282 13:48:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:58.282 13:48:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:58.282 13:48:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:58.541 [2024-07-10 13:48:37.736235] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:58.541 [2024-07-10 13:48:37.736352] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:58.541 [2024-07-10 13:48:37.736463] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.541 13:48:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:58.541 13:48:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:58.541 13:48:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.541 13:48:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:58.801 13:48:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:58.801 13:48:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:58.801 13:48:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:59.060 [2024-07-10 13:48:38.244479] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:59.060 13:48:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:59.060 13:48:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:59.060 13:48:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.060 13:48:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:59.319 13:48:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:59.319 13:48:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:59.319 13:48:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:59.578 [2024-07-10 13:48:38.784931] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:59.578 [2024-07-10 13:48:38.785067] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:59.578 13:48:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:59.578 13:48:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:59.578 13:48:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.578 13:48:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:59.837 13:48:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:59.837 13:48:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:59.837 13:48:39 -- bdev/bdev_raid.sh@287 -- # killprocess 133243 00:24:59.837 13:48:39 -- common/autotest_common.sh@926 -- # '[' -z 133243 ']' 00:24:59.837 13:48:39 -- common/autotest_common.sh@930 -- # kill -0 133243 00:24:59.837 13:48:39 -- common/autotest_common.sh@931 -- # uname 00:24:59.837 13:48:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.837 13:48:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133243 00:24:59.837 killing process with pid 133243 00:24:59.837 13:48:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:59.837 13:48:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:59.837 13:48:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133243' 00:24:59.837 13:48:39 -- common/autotest_common.sh@945 -- # kill 133243 00:24:59.837 13:48:39 -- common/autotest_common.sh@950 -- # wait 133243 00:24:59.837 [2024-07-10 13:48:39.136175] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.837 [2024-07-10 13:48:39.136290] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.241 ************************************ 00:25:01.241 END TEST raid5f_state_function_test 00:25:01.241 ************************************ 00:25:01.241 13:48:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:01.241 00:25:01.241 real 0m13.402s 00:25:01.241 user 0m23.325s 00:25:01.241 sys 0m1.716s 00:25:01.242 13:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.242 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:25:01.242 13:48:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:01.242 13:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.242 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.242 ************************************ 00:25:01.242 START TEST raid5f_state_function_test_sb 00:25:01.242 ************************************ 00:25:01.242 13:48:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=133685 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:01.242 Process raid pid: 133685 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133685' 00:25:01.242 13:48:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133685 /var/tmp/spdk-raid.sock 00:25:01.242 13:48:40 -- common/autotest_common.sh@819 -- # '[' -z 133685 ']' 00:25:01.242 13:48:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:01.242 13:48:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:01.242 13:48:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:01.242 13:48:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:01.242 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.502 [2024-07-10 13:48:40.641494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:01.502 [2024-07-10 13:48:40.642171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:01.502 [2024-07-10 13:48:40.803333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.761 [2024-07-10 13:48:41.024993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:02.019 [2024-07-10 13:48:41.258311] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:02.278 13:48:41 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:02.278 13:48:41 -- common/autotest_common.sh@852 -- # return 0
00:25:02.278 13:48:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:25:02.537 [2024-07-10 13:48:41.713278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:25:02.537 [2024-07-10 13:48:41.713421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:25:02.537 [2024-07-10 13:48:41.713449] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:02.537 [2024-07-10 13:48:41.713476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:02.537 [2024-07-10 13:48:41.713491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:25:02.537 [2024-07-10 13:48:41.713531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:25:02.537 [2024-07-10 13:48:41.713553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:25:02.537 [2024-07-10 13:48:41.713604] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:02.537 13:48:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:02.796 13:48:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:02.796 "name": "Existed_Raid",
00:25:02.796 "uuid": "7666379a-0ebc-4ba1-af00-efbd1f7ec915",
00:25:02.796 "strip_size_kb": 64,
00:25:02.796 "state": "configuring",
00:25:02.796 "raid_level": "raid5f",
00:25:02.796 "superblock": true,
00:25:02.796 "num_base_bdevs": 4,
00:25:02.796 "num_base_bdevs_discovered": 0,
00:25:02.796 "num_base_bdevs_operational": 4,
00:25:02.796 "base_bdevs_list": [
00:25:02.796 {
00:25:02.796 "name": "BaseBdev1", 00:25:02.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.796 "is_configured": false, 00:25:02.796 "data_offset": 0, 00:25:02.796 "data_size": 0 00:25:02.796 }, 00:25:02.797 { 00:25:02.797 "name": "BaseBdev2", 00:25:02.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.797 "is_configured": false, 00:25:02.797 "data_offset": 0, 00:25:02.797 "data_size": 0 00:25:02.797 }, 00:25:02.797 { 00:25:02.797 "name": "BaseBdev3", 00:25:02.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.797 "is_configured": false, 00:25:02.797 "data_offset": 0, 00:25:02.797 "data_size": 0 00:25:02.797 }, 00:25:02.797 { 00:25:02.797 "name": "BaseBdev4", 00:25:02.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.797 "is_configured": false, 00:25:02.797 "data_offset": 0, 00:25:02.797 "data_size": 0 00:25:02.797 } 00:25:02.797 ] 00:25:02.797 }' 00:25:02.797 13:48:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:02.797 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:25:03.392 13:48:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:03.651 [2024-07-10 13:48:42.755321] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.651 [2024-07-10 13:48:42.755429] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:03.651 13:48:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:03.651 [2024-07-10 13:48:42.939087] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.651 [2024-07-10 13:48:42.939207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.651 [2024-07-10 13:48:42.939233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.651 [2024-07-10 13:48:42.939279] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.651 [2024-07-10 13:48:42.939325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.651 [2024-07-10 13:48:42.939368] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.651 [2024-07-10 13:48:42.939413] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:03.651 [2024-07-10 13:48:42.939452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:03.651 13:48:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.910 [2024-07-10 13:48:43.171312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.910 BaseBdev1 00:25:03.910 13:48:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:03.910 13:48:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:25:03.910 13:48:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:03.910 13:48:43 -- common/autotest_common.sh@889 -- # local i 00:25:03.910 13:48:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:03.910 13:48:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:03.910 13:48:43 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:04.169 13:48:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:04.427 [ 00:25:04.427 { 00:25:04.427 "name": "BaseBdev1", 00:25:04.427 "aliases": [ 00:25:04.427 "5d1d1c2a-efb2-4a90-bc23-96ff9a6ba805" 00:25:04.427 ], 00:25:04.427 "product_name": "Malloc disk", 00:25:04.427 "block_size": 512, 00:25:04.427 "num_blocks": 65536, 00:25:04.427 "uuid": "5d1d1c2a-efb2-4a90-bc23-96ff9a6ba805", 00:25:04.427 "assigned_rate_limits": { 00:25:04.427 "rw_ios_per_sec": 0, 00:25:04.427 "rw_mbytes_per_sec": 0, 00:25:04.427 "r_mbytes_per_sec": 0, 00:25:04.427 "w_mbytes_per_sec": 0 00:25:04.427 }, 00:25:04.427 "claimed": true, 00:25:04.427 "claim_type": "exclusive_write", 00:25:04.427 "zoned": false, 00:25:04.427 "supported_io_types": { 00:25:04.427 "read": true, 00:25:04.427 "write": true, 00:25:04.427 "unmap": true, 00:25:04.427 "write_zeroes": true, 00:25:04.427 "flush": true, 00:25:04.427 "reset": true, 00:25:04.427 "compare": false, 00:25:04.427 "compare_and_write": false, 00:25:04.427 "abort": true, 00:25:04.427 "nvme_admin": false, 00:25:04.427 "nvme_io": false 00:25:04.427 }, 00:25:04.427 "memory_domains": [ 00:25:04.427 { 00:25:04.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.427 "dma_device_type": 2 00:25:04.427 } 00:25:04.427 ], 00:25:04.427 "driver_specific": {} 00:25:04.427 } 00:25:04.427 ] 00:25:04.427 13:48:43 -- common/autotest_common.sh@895 -- # return 0 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:04.427 "name": "Existed_Raid", 00:25:04.427 "uuid": "66f1ce46-e973-43ee-9f04-4185cf9d9ef9", 00:25:04.427 "strip_size_kb": 64, 00:25:04.427 "state": "configuring", 00:25:04.427 "raid_level": "raid5f", 00:25:04.427 "superblock": true, 00:25:04.427 "num_base_bdevs": 4, 00:25:04.427 "num_base_bdevs_discovered": 1, 00:25:04.427 "num_base_bdevs_operational": 4, 00:25:04.427 "base_bdevs_list": [ 00:25:04.427 { 00:25:04.427 "name": "BaseBdev1", 00:25:04.427 "uuid": "5d1d1c2a-efb2-4a90-bc23-96ff9a6ba805", 00:25:04.427 "is_configured": true, 00:25:04.427 "data_offset": 2048, 00:25:04.427 "data_size": 63488 00:25:04.427 }, 00:25:04.427 { 00:25:04.427 "name": "BaseBdev2", 00:25:04.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.427 "is_configured": false, 00:25:04.427 "data_offset": 0, 00:25:04.427 "data_size": 0 
00:25:04.427 },
00:25:04.427 {
00:25:04.427 "name": "BaseBdev3",
00:25:04.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:04.427 "is_configured": false,
00:25:04.427 "data_offset": 0,
00:25:04.427 "data_size": 0
00:25:04.427 },
00:25:04.427 {
00:25:04.427 "name": "BaseBdev4",
00:25:04.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:04.427 "is_configured": false,
00:25:04.427 "data_offset": 0,
00:25:04.427 "data_size": 0
00:25:04.427 }
00:25:04.427 ]
00:25:04.427 }'
00:25:04.427 13:48:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:04.427 13:48:43 -- common/autotest_common.sh@10 -- # set +x
00:25:05.363 13:48:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:25:05.363 [2024-07-10 13:48:44.636828] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:25:05.363 [2024-07-10 13:48:44.636927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring
00:25:05.363 13:48:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:25:05.363 13:48:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:25:05.623 13:48:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:25:05.882 BaseBdev1
00:25:05.882 13:48:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:25:05.882 13:48:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:25:05.882 13:48:45 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:25:05.882 13:48:45 -- common/autotest_common.sh@889 -- # local i
00:25:05.882 13:48:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:25:05.882 13:48:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:25:05.882 13:48:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:25:06.141 13:48:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:25:06.400 [
00:25:06.400 {
00:25:06.400 "name": "BaseBdev1",
00:25:06.400 "aliases": [
00:25:06.400 "0e442738-3b5a-4908-83b1-aadd9fb7e3f6"
00:25:06.400 ],
00:25:06.400 "product_name": "Malloc disk",
00:25:06.400 "block_size": 512,
00:25:06.400 "num_blocks": 65536,
00:25:06.400 "uuid": "0e442738-3b5a-4908-83b1-aadd9fb7e3f6",
00:25:06.400 "assigned_rate_limits": {
00:25:06.400 "rw_ios_per_sec": 0,
00:25:06.400 "rw_mbytes_per_sec": 0,
00:25:06.400 "r_mbytes_per_sec": 0,
00:25:06.400 "w_mbytes_per_sec": 0
00:25:06.400 },
00:25:06.400 "claimed": false,
00:25:06.400 "zoned": false,
00:25:06.400 "supported_io_types": {
00:25:06.400 "read": true,
00:25:06.400 "write": true,
00:25:06.400 "unmap": true,
00:25:06.400 "write_zeroes": true,
00:25:06.400 "flush": true,
00:25:06.400 "reset": true,
00:25:06.400 "compare": false,
00:25:06.400 "compare_and_write": false,
00:25:06.400 "abort": true,
00:25:06.400 "nvme_admin": false,
00:25:06.400 "nvme_io": false
00:25:06.400 },
00:25:06.400 "memory_domains": [
00:25:06.400 {
00:25:06.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:06.400 "dma_device_type": 2
00:25:06.400 }
00:25:06.400 ],
00:25:06.400 "driver_specific": {}
00:25:06.400 }
00:25:06.400 ]
00:25:06.400 13:48:45 -- common/autotest_common.sh@895 -- # return 0
00:25:06.400 13:48:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:25:06.724 [2024-07-10 13:48:45.843415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:06.724 [2024-07-10 13:48:45.845313] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:25:06.724 [2024-07-10 13:48:45.845431] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:25:06.724 [2024-07-10 13:48:45.845482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:25:06.724 [2024-07-10 13:48:45.845529] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:25:06.724 [2024-07-10 13:48:45.845578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:25:06.724 [2024-07-10 13:48:45.845613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:06.724 13:48:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:06.724 13:48:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:06.724 "name": "Existed_Raid",
00:25:06.724 "uuid": "181d7598-814d-49e8-9cb0-6d655ae8992b",
00:25:06.724 "strip_size_kb": 64,
00:25:06.725 "state": "configuring",
00:25:06.725 "raid_level": "raid5f",
00:25:06.725 "superblock": true,
00:25:06.725 "num_base_bdevs": 4,
00:25:06.725 "num_base_bdevs_discovered": 1,
00:25:06.725 "num_base_bdevs_operational": 4,
00:25:06.725 "base_bdevs_list": [
00:25:06.725 {
00:25:06.725 "name": "BaseBdev1",
00:25:06.725 "uuid": "0e442738-3b5a-4908-83b1-aadd9fb7e3f6",
00:25:06.725 "is_configured": true,
00:25:06.725 "data_offset": 2048,
00:25:06.725 "data_size": 63488
00:25:06.725 },
00:25:06.725 {
00:25:06.725 "name": "BaseBdev2",
00:25:06.725 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:06.725 "is_configured": false, 00:25:06.725 "data_offset": 0, 00:25:06.725 "data_size": 0 00:25:06.725 } 00:25:06.725 ] 00:25:06.725 }' 00:25:06.725 13:48:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.725 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:25:07.664 13:48:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.664 [2024-07-10 13:48:46.941117] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.664 BaseBdev2 00:25:07.664 13:48:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:07.664 13:48:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:25:07.664 13:48:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:07.664 13:48:46 -- common/autotest_common.sh@889 -- # local i 00:25:07.664 13:48:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:07.664 13:48:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:07.664 13:48:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.924 13:48:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:08.183 [ 00:25:08.183 { 00:25:08.183 "name": "BaseBdev2", 00:25:08.183 "aliases": [ 00:25:08.183 "40a67a37-9235-407f-a55d-339130b66df3" 00:25:08.183 ], 00:25:08.183 "product_name": "Malloc disk", 00:25:08.183 "block_size": 512, 00:25:08.183 "num_blocks": 65536, 00:25:08.183 "uuid": "40a67a37-9235-407f-a55d-339130b66df3", 00:25:08.183 "assigned_rate_limits": { 00:25:08.183 "rw_ios_per_sec": 0, 00:25:08.183 "rw_mbytes_per_sec": 0, 00:25:08.183 "r_mbytes_per_sec": 0, 00:25:08.183 "w_mbytes_per_sec": 0 00:25:08.183 }, 00:25:08.183 "claimed": true, 00:25:08.183 "claim_type": "exclusive_write", 00:25:08.183 "zoned": false, 00:25:08.183 "supported_io_types": { 00:25:08.183 "read": true, 00:25:08.183 "write": true, 00:25:08.183 "unmap": true, 00:25:08.183 "write_zeroes": true, 00:25:08.183 "flush": true, 00:25:08.183 "reset": true, 00:25:08.183 "compare": false, 00:25:08.183 "compare_and_write": false, 00:25:08.183 "abort": true, 00:25:08.183 "nvme_admin": false, 00:25:08.183 "nvme_io": false 00:25:08.183 }, 00:25:08.183 "memory_domains": [ 00:25:08.183 { 00:25:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.183 "dma_device_type": 2 00:25:08.183 } 00:25:08.183 ], 00:25:08.183 "driver_specific": {} 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 13:48:47 -- common/autotest_common.sh@895 -- # return 0 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.183 13:48:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.183 "name": "Existed_Raid", 00:25:08.183 "uuid": "181d7598-814d-49e8-9cb0-6d655ae8992b", 00:25:08.183 "strip_size_kb": 64, 00:25:08.183 "state": "configuring", 00:25:08.183 "raid_level": "raid5f", 00:25:08.183 "superblock": true, 00:25:08.183 "num_base_bdevs": 4, 00:25:08.183 "num_base_bdevs_discovered": 2, 00:25:08.183 "num_base_bdevs_operational": 4, 00:25:08.183 "base_bdevs_list": [ 00:25:08.183 { 00:25:08.183 "name": "BaseBdev1", 00:25:08.183 "uuid": "0e442738-3b5a-4908-83b1-aadd9fb7e3f6", 00:25:08.183 "is_configured": true, 00:25:08.183 "data_offset": 2048, 00:25:08.184 "data_size": 63488 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "name": "BaseBdev2", 00:25:08.184 "uuid": "40a67a37-9235-407f-a55d-339130b66df3", 00:25:08.184 "is_configured": true, 00:25:08.184 "data_offset": 2048, 00:25:08.184 "data_size": 63488 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "name": "BaseBdev3", 00:25:08.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.184 "is_configured": false, 00:25:08.184 "data_offset": 0, 00:25:08.184 "data_size": 0 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "name": "BaseBdev4", 00:25:08.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.184 "is_configured": false, 00:25:08.184 "data_offset": 0, 00:25:08.184 "data_size": 0 00:25:08.184 } 00:25:08.184 ] 00:25:08.184 }' 00:25:08.184 13:48:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.184 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:25:09.123 13:48:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:09.123 [2024-07-10 13:48:48.363438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.123 BaseBdev3 00:25:09.123 13:48:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:09.123 13:48:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:25:09.123 13:48:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:09.123 13:48:48 -- common/autotest_common.sh@889 -- # local i 00:25:09.123 13:48:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:09.123 13:48:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:09.123 13:48:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.383 13:48:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:09.642 [ 00:25:09.642 { 00:25:09.642 "name": "BaseBdev3", 00:25:09.642 "aliases": [ 00:25:09.642 "e3afa409-221c-4f11-8b5d-45e78ca6e260" 00:25:09.642 ], 00:25:09.642 "product_name": "Malloc disk", 00:25:09.642 "block_size": 512, 00:25:09.642 "num_blocks": 65536, 00:25:09.642 "uuid": "e3afa409-221c-4f11-8b5d-45e78ca6e260", 00:25:09.642 "assigned_rate_limits": { 00:25:09.642 "rw_ios_per_sec": 0, 00:25:09.642 "rw_mbytes_per_sec": 0, 00:25:09.642 "r_mbytes_per_sec": 0, 00:25:09.642 "w_mbytes_per_sec": 0 00:25:09.642 }, 00:25:09.642 "claimed": true, 00:25:09.642 "claim_type": "exclusive_write", 
00:25:09.642 "zoned": false, 00:25:09.642 "supported_io_types": { 00:25:09.642 "read": true, 00:25:09.642 "write": true, 00:25:09.642 "unmap": true, 00:25:09.642 "write_zeroes": true, 00:25:09.642 "flush": true, 00:25:09.642 "reset": true, 00:25:09.642 "compare": false, 00:25:09.642 "compare_and_write": false, 00:25:09.642 "abort": true, 00:25:09.642 "nvme_admin": false, 00:25:09.642 "nvme_io": false 00:25:09.642 }, 00:25:09.642 "memory_domains": [ 00:25:09.642 { 00:25:09.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.642 "dma_device_type": 2 00:25:09.642 } 00:25:09.642 ], 00:25:09.642 "driver_specific": {} 00:25:09.642 } 00:25:09.642 ] 00:25:09.642 13:48:48 -- common/autotest_common.sh@895 -- # return 0 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.642 13:48:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.901 13:48:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.901 "name": "Existed_Raid", 00:25:09.901 "uuid": "181d7598-814d-49e8-9cb0-6d655ae8992b", 00:25:09.901 "strip_size_kb": 64, 00:25:09.901 "state": "configuring", 00:25:09.901 "raid_level": "raid5f", 00:25:09.901 "superblock": true, 00:25:09.901 "num_base_bdevs": 4, 00:25:09.901 "num_base_bdevs_discovered": 3, 00:25:09.901 "num_base_bdevs_operational": 4, 00:25:09.901 "base_bdevs_list": [ 00:25:09.901 { 00:25:09.901 "name": "BaseBdev1", 00:25:09.901 "uuid": "0e442738-3b5a-4908-83b1-aadd9fb7e3f6", 00:25:09.901 "is_configured": true, 00:25:09.901 "data_offset": 2048, 00:25:09.901 "data_size": 63488 00:25:09.901 }, 00:25:09.901 { 00:25:09.901 "name": "BaseBdev2", 00:25:09.901 "uuid": "40a67a37-9235-407f-a55d-339130b66df3", 00:25:09.901 "is_configured": true, 00:25:09.901 "data_offset": 2048, 00:25:09.901 "data_size": 63488 00:25:09.901 }, 00:25:09.901 { 00:25:09.902 "name": "BaseBdev3", 00:25:09.902 "uuid": "e3afa409-221c-4f11-8b5d-45e78ca6e260", 00:25:09.902 "is_configured": true, 00:25:09.902 "data_offset": 2048, 00:25:09.902 "data_size": 63488 00:25:09.902 }, 00:25:09.902 { 00:25:09.902 "name": "BaseBdev4", 00:25:09.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.902 "is_configured": false, 00:25:09.902 "data_offset": 0, 00:25:09.902 "data_size": 0 00:25:09.902 } 00:25:09.902 ] 00:25:09.902 }' 00:25:09.902 13:48:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.902 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:25:10.469 13:48:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:10.728 [2024-07-10 13:48:49.867791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:10.728 [2024-07-10 13:48:49.868135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:25:10.728 [2024-07-10 13:48:49.868183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:10.728 [2024-07-10 13:48:49.868325] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:10.728 BaseBdev4 00:25:10.728 [2024-07-10 13:48:49.876840] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:25:10.728 [2024-07-10 13:48:49.876927] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:25:10.728 [2024-07-10 13:48:49.877160] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.728 13:48:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:10.728 13:48:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:25:10.728 13:48:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:25:10.728 13:48:49 -- common/autotest_common.sh@889 -- # local i 00:25:10.728 13:48:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:25:10.728 13:48:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:25:10.728 13:48:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.728 13:48:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:10.988 [ 00:25:10.988 { 00:25:10.988 "name": "BaseBdev4", 00:25:10.988 "aliases": [ 00:25:10.988 "436220a5-eaa1-4cb2-a971-1c72015b5f56" 00:25:10.988 ], 00:25:10.988 "product_name": "Malloc disk", 00:25:10.988 "block_size": 512, 00:25:10.988 "num_blocks": 65536, 00:25:10.988 "uuid": "436220a5-eaa1-4cb2-a971-1c72015b5f56", 00:25:10.988 "assigned_rate_limits": { 00:25:10.988 "rw_ios_per_sec": 0, 00:25:10.988 "rw_mbytes_per_sec": 0, 00:25:10.988 "r_mbytes_per_sec": 0, 00:25:10.988 "w_mbytes_per_sec": 0 00:25:10.988 }, 00:25:10.988 "claimed": true, 00:25:10.988 "claim_type": "exclusive_write", 00:25:10.988 "zoned": false, 00:25:10.988 "supported_io_types": { 00:25:10.988 "read": true, 00:25:10.988 "write": true, 00:25:10.988 "unmap": true, 00:25:10.988 "write_zeroes": true, 00:25:10.988 "flush": true, 00:25:10.988 "reset": true, 00:25:10.988 "compare": false, 00:25:10.988 "compare_and_write": false, 00:25:10.988 "abort": true, 00:25:10.988 "nvme_admin": false, 00:25:10.988 "nvme_io": false 00:25:10.988 }, 00:25:10.988 "memory_domains": [ 00:25:10.988 { 00:25:10.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.988 "dma_device_type": 2 00:25:10.988 } 00:25:10.988 ], 00:25:10.988 "driver_specific": {} 00:25:10.988 } 00:25:10.988 ] 00:25:10.988 13:48:50 -- common/autotest_common.sh@895 -- # return 0 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.988 13:48:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.251 13:48:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.251 "name": "Existed_Raid", 00:25:11.251 "uuid": "181d7598-814d-49e8-9cb0-6d655ae8992b", 00:25:11.251 "strip_size_kb": 64, 00:25:11.251 "state": "online", 00:25:11.251 "raid_level": "raid5f", 00:25:11.251 "superblock": true, 00:25:11.251 "num_base_bdevs": 4, 00:25:11.251 "num_base_bdevs_discovered": 4, 00:25:11.251 "num_base_bdevs_operational": 4, 00:25:11.251 "base_bdevs_list": [ 00:25:11.251 { 00:25:11.251 "name": "BaseBdev1", 00:25:11.251 "uuid": "0e442738-3b5a-4908-83b1-aadd9fb7e3f6", 00:25:11.251 "is_configured": true, 00:25:11.251 "data_offset": 2048, 00:25:11.251 "data_size": 63488 00:25:11.251 }, 00:25:11.251 { 00:25:11.251 "name": "BaseBdev2", 00:25:11.251 "uuid": "40a67a37-9235-407f-a55d-339130b66df3", 00:25:11.251 "is_configured": true, 00:25:11.251 "data_offset": 2048, 00:25:11.251 "data_size": 63488 00:25:11.251 }, 00:25:11.251 { 00:25:11.251 "name": "BaseBdev3", 00:25:11.251 "uuid": "e3afa409-221c-4f11-8b5d-45e78ca6e260", 00:25:11.251 "is_configured": true, 00:25:11.251 "data_offset": 2048, 00:25:11.251 "data_size": 63488 00:25:11.251 }, 00:25:11.251 { 00:25:11.251 "name": "BaseBdev4", 00:25:11.251 "uuid": "436220a5-eaa1-4cb2-a971-1c72015b5f56", 00:25:11.251 "is_configured": true, 00:25:11.251 "data_offset": 2048, 00:25:11.251 "data_size": 63488 00:25:11.251 } 00:25:11.251 ] 00:25:11.251 }' 00:25:11.251 13:48:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.251 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:25:11.819 13:48:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:12.079 [2024-07-10 13:48:51.336386] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.339 13:48:51 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.339 "name": "Existed_Raid", 00:25:12.339 "uuid": "181d7598-814d-49e8-9cb0-6d655ae8992b", 00:25:12.339 "strip_size_kb": 64, 00:25:12.339 "state": "online", 00:25:12.339 "raid_level": "raid5f", 00:25:12.339 "superblock": true, 00:25:12.339 "num_base_bdevs": 4, 00:25:12.339 "num_base_bdevs_discovered": 3, 00:25:12.339 "num_base_bdevs_operational": 3, 00:25:12.339 "base_bdevs_list": [ 00:25:12.339 { 00:25:12.339 "name": null, 00:25:12.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.339 "is_configured": false, 00:25:12.339 "data_offset": 2048, 00:25:12.339 "data_size": 63488 00:25:12.339 }, 00:25:12.339 { 00:25:12.339 "name": "BaseBdev2", 00:25:12.339 "uuid": "40a67a37-9235-407f-a55d-339130b66df3", 00:25:12.339 "is_configured": true, 00:25:12.339 "data_offset": 2048, 00:25:12.339 "data_size": 63488 00:25:12.339 }, 00:25:12.339 { 00:25:12.339 "name": "BaseBdev3", 00:25:12.339 "uuid": "e3afa409-221c-4f11-8b5d-45e78ca6e260", 00:25:12.339 "is_configured": true, 00:25:12.339 "data_offset": 2048, 00:25:12.339 "data_size": 63488 00:25:12.339 }, 00:25:12.339 { 00:25:12.339 "name": "BaseBdev4", 00:25:12.339 "uuid": "436220a5-eaa1-4cb2-a971-1c72015b5f56", 00:25:12.339 "is_configured": true, 00:25:12.339 "data_offset": 2048, 00:25:12.339 "data_size": 63488 00:25:12.339 } 00:25:12.339 ] 00:25:12.339 }' 00:25:12.339 13:48:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.339 13:48:51 -- common/autotest_common.sh@10 -- # set +x 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:13.278 13:48:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:13.538 [2024-07-10 13:48:52.741661] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:13.538 [2024-07-10 13:48:52.741771] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:13.538 [2024-07-10 13:48:52.741860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.538 13:48:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:13.538 13:48:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:13.538 13:48:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.538 13:48:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:13.806 13:48:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:13.806 13:48:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:13.806 13:48:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:14.069 [2024-07-10 13:48:53.247505] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:14.069 13:48:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:14.069 13:48:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:14.069 13:48:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.069 13:48:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:14.329 13:48:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:14.329 13:48:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:14.329 13:48:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:14.589 [2024-07-10 13:48:53.782359] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:14.589 [2024-07-10 13:48:53.782512] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:25:14.589 13:48:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:14.589 13:48:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:14.589 13:48:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:14.589 13:48:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.849 13:48:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:14.849 13:48:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:14.849 13:48:54 -- bdev/bdev_raid.sh@287 -- # killprocess 133685 00:25:14.849 13:48:54 -- common/autotest_common.sh@926 -- # '[' -z 133685 ']' 00:25:14.849 13:48:54 -- common/autotest_common.sh@930 -- # kill -0 133685 00:25:14.849 13:48:54 -- common/autotest_common.sh@931 -- # uname 00:25:14.849 13:48:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:14.849 13:48:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133685 00:25:14.849 killing process with pid 133685 00:25:14.849 13:48:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:14.849 13:48:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:14.849 13:48:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133685' 00:25:14.849 13:48:54 -- common/autotest_common.sh@945 -- # kill 133685 00:25:14.849 13:48:54 -- common/autotest_common.sh@950 -- # wait 133685 00:25:14.849 [2024-07-10 13:48:54.153757] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:14.849 [2024-07-10 13:48:54.153882] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:16.228 ************************************ 00:25:16.228 END TEST raid5f_state_function_test_sb 00:25:16.228 ************************************ 00:25:16.228 13:48:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:16.228 00:25:16.228 real 0m14.994s 00:25:16.228 user 0m26.327s 00:25:16.228 sys 0m1.640s 00:25:16.228 13:48:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.228 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:25:16.487 13:48:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:16.487 13:48:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:16.487 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:25:16.487 
************************************ 00:25:16.487 START TEST raid5f_superblock_test 00:25:16.487 ************************************ 00:25:16.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:16.487 13:48:55 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:16.487 13:48:55 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:16.488 13:48:55 -- bdev/bdev_raid.sh@357 -- # raid_pid=134155 00:25:16.488 13:48:55 -- bdev/bdev_raid.sh@358 -- # waitforlisten 134155 /var/tmp/spdk-raid.sock 00:25:16.488 13:48:55 -- common/autotest_common.sh@819 -- # '[' -z 134155 ']' 00:25:16.488 13:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:16.488 13:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:16.488 13:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:16.488 13:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:16.488 13:48:55 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:16.488 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:25:16.488 [2024-07-10 13:48:55.683853] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:16.488 [2024-07-10 13:48:55.684572] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134155 ] 00:25:16.770 [2024-07-10 13:48:55.846056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.770 [2024-07-10 13:48:56.072515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.061 [2024-07-10 13:48:56.294354] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.320 13:48:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.320 13:48:56 -- common/autotest_common.sh@852 -- # return 0 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:17.320 13:48:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:17.579 malloc1 00:25:17.579 13:48:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.838 [2024-07-10 13:48:56.983087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.838 [2024-07-10 13:48:56.983327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.838 [2024-07-10 13:48:56.983481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:17.838 [2024-07-10 13:48:56.983605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.838 [2024-07-10 13:48:56.985981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.838 [2024-07-10 13:48:56.986068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.838 pt1 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:17.838 13:48:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:18.097 malloc2 00:25:18.097 13:48:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:25:18.356 [2024-07-10 13:48:57.482608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:18.356 [2024-07-10 13:48:57.482790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.356 [2024-07-10 13:48:57.482857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:18.356 [2024-07-10 13:48:57.482946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.356 [2024-07-10 13:48:57.485173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.356 [2024-07-10 13:48:57.485264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:18.356 pt2 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.356 13:48:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:18.616 malloc3 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:18.616 [2024-07-10 13:48:57.953405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:18.616 [2024-07-10 13:48:57.953574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.616 [2024-07-10 13:48:57.953656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:18.616 [2024-07-10 13:48:57.953750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.616 [2024-07-10 13:48:57.956025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.616 [2024-07-10 13:48:57.956158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:18.616 pt3 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.616 13:48:57 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:18.875 malloc4 00:25:18.875 13:48:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00:25:19.133 13:48:58 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:25:19.394 [2024-07-10 13:48:58.577565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:19.394 [2024-07-10 13:48:58.579411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:19.394 [2024-07-10 13:48:58.579532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:25:19.394 [2024-07-10 13:48:58.579621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:25:19.394 [2024-07-10 13:48:58.579853] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:25:19.394 [2024-07-10 13:48:58.579892] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:25:19.394 [2024-07-10 13:48:58.580050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:25:19.394 [2024-07-10 13:48:58.587809] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:25:19.394 [2024-07-10 13:48:58.587873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:25:19.394 [2024-07-10 13:48:58.588094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:19.394 13:48:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:19.655 13:48:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:19.655 "name": "raid_bdev1",
00:25:19.655 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:19.655 "strip_size_kb": 64,
00:25:19.655 "state": "online",
00:25:19.655 "raid_level": "raid5f",
00:25:19.655 "superblock": true,
00:25:19.655 "num_base_bdevs": 4,
00:25:19.655 "num_base_bdevs_discovered": 4,
00:25:19.655 "num_base_bdevs_operational": 4,
00:25:19.655 "base_bdevs_list": [
00:25:19.655 {
00:25:19.655 "name": "pt1",
00:25:19.655 "uuid": "bca4829f-cf04-5c68-b022-884a042797fe",
00:25:19.655 "is_configured": true,
00:25:19.655 "data_offset": 2048,
00:25:19.655 "data_size": 63488
00:25:19.655 },
00:25:19.655 {
00:25:19.655 "name": "pt2",
00:25:19.655 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:19.655 "is_configured": true,
00:25:19.655 "data_offset": 2048,
00:25:19.655 "data_size": 63488
00:25:19.655 },
00:25:19.655 {
00:25:19.655 "name": "pt3",
00:25:19.655 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:19.655 "is_configured": true,
00:25:19.655 "data_offset": 2048,
00:25:19.655 "data_size": 63488
00:25:19.655 },
00:25:19.655 {
00:25:19.655 "name": "pt4",
00:25:19.655 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:19.655 "is_configured": true,
00:25:19.655 "data_offset": 2048,
00:25:19.655 "data_size": 63488
00:25:19.655 }
00:25:19.655 ]
00:25:19.655 }'
00:25:19.655 13:48:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:19.655 13:48:58 -- common/autotest_common.sh@10 -- # set +x
00:25:20.224 13:48:59 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:25:20.224 13:48:59 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:25:20.485 [2024-07-10 13:48:59.599759] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:20.485 13:48:59 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d27509e8-bdbd-4034-8e04-0f2b18c3b0ce
00:25:20.485 13:48:59 -- bdev/bdev_raid.sh@380 -- # '[' -z d27509e8-bdbd-4034-8e04-0f2b18c3b0ce ']'
00:25:20.485 13:48:59 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:20.485 [2024-07-10 13:48:59.791283] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:20.485 [2024-07-10 13:48:59.791379] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:20.485 [2024-07-10 13:48:59.791489] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:20.485 [2024-07-10 13:48:59.791624] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:20.485 [2024-07-10 13:48:59.791657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:25:20.485 13:48:59 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:20.485 13:48:59 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:25:20.744 13:48:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:25:20.744 13:48:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:25:20.744 13:48:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:25:20.744 13:48:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:25:21.002 13:49:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:25:21.002 13:49:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:25:21.262 13:49:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:25:21.262 13:49:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:25:21.262 13:49:00 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:25:21.262 13:49:00 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:25:21.521 13:49:00 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:25:21.521 13:49:00 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:25:21.781 13:49:00 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:25:21.781 13:49:00 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:25:21.781 13:49:00 -- common/autotest_common.sh@640 -- # local es=0
00:25:21.781 13:49:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:25:21.781 13:49:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:21.781 13:49:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:21.781 13:49:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:21.781 13:49:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:21.781 13:49:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:21.781 13:49:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:21.781 13:49:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:21.781 13:49:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:25:21.781 13:49:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:25:21.781 [2024-07-10 13:49:01.116933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:25:21.781 [2024-07-10 13:49:01.118687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:25:21.781 [2024-07-10 13:49:01.118792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:25:21.781 [2024-07-10 13:49:01.118842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:25:21.781 [2024-07-10 13:49:01.118914] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:25:21.781 [2024-07-10 13:49:01.119016] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:25:21.781 [2024-07-10 13:49:01.119065] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:25:21.781 [2024-07-10 13:49:01.119133] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:25:21.781 [2024-07-10 13:49:01.119181] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:21.781 [2024-07-10 13:49:01.119208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring
00:25:21.781 request:
00:25:21.781 {
00:25:21.781 "name": "raid_bdev1",
00:25:21.781 "raid_level": "raid5f",
00:25:21.781 "base_bdevs": [
00:25:21.781 "malloc1",
00:25:21.781 "malloc2",
00:25:21.781 "malloc3",
00:25:21.781 "malloc4"
00:25:21.781 ],
00:25:21.781 "superblock": false,
00:25:21.781 "strip_size_kb": 64,
00:25:21.781 "method": "bdev_raid_create",
00:25:21.781 "req_id": 1
00:25:21.781 }
00:25:21.781 Got JSON-RPC error response
00:25:21.781 response:
00:25:21.781 {
00:25:21.781 "code": -17,
00:25:21.781 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:25:21.781 }
00:25:21.781 13:49:01 -- common/autotest_common.sh@643 -- # es=1
00:25:21.781 13:49:01 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:25:21.781 13:49:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:25:21.781 13:49:01 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:25:21.781 13:49:01 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:21.781 13:49:01 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:25:22.040 13:49:01 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:25:22.040 13:49:01 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
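The block above is a negative test: because the array was created with -s, each malloc bdev still carries a raid superblock, so creating a different array directly on top of them must fail with -17 (File exists). The NOT helper from common/autotest_common.sh inverts the wrapped command's exit status; a simplified rendering of the idea (the real helper is more general than this):

    NOT() {  # succeed only if the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1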
00:25:22.040 13:49:01 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:22.299 [2024-07-10 13:49:01.488242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:22.299 [2024-07-10 13:49:01.488424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:22.299 [2024-07-10 13:49:01.488466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:25:22.299 [2024-07-10 13:49:01.488515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:22.299 [2024-07-10 13:49:01.490634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:22.299 [2024-07-10 13:49:01.490738] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:22.299 [2024-07-10 13:49:01.490882] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:25:22.299 [2024-07-10 13:49:01.490963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:22.299 pt1
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:22.299 13:49:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:22.559 13:49:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:22.559 "name": "raid_bdev1",
00:25:22.559 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:22.559 "strip_size_kb": 64,
00:25:22.559 "state": "configuring",
00:25:22.559 "raid_level": "raid5f",
00:25:22.559 "superblock": true,
00:25:22.559 "num_base_bdevs": 4,
00:25:22.559 "num_base_bdevs_discovered": 1,
00:25:22.559 "num_base_bdevs_operational": 4,
00:25:22.559 "base_bdevs_list": [
00:25:22.559 {
00:25:22.559 "name": "pt1",
00:25:22.559 "uuid": "bca4829f-cf04-5c68-b022-884a042797fe",
00:25:22.559 "is_configured": true,
00:25:22.559 "data_offset": 2048,
00:25:22.559 "data_size": 63488
00:25:22.559 },
00:25:22.559 {
00:25:22.559 "name": null,
00:25:22.559 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:22.559 "is_configured": false,
00:25:22.559 "data_offset": 2048,
00:25:22.559 "data_size": 63488
00:25:22.559 },
00:25:22.559 {
00:25:22.559 "name": null,
00:25:22.559 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:22.559 "is_configured": false,
00:25:22.559 "data_offset": 2048,
00:25:22.559 "data_size": 63488
00:25:22.559 },
00:25:22.559 {
00:25:22.559 "name": null,
00:25:22.559 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:22.559 "is_configured": false,
00:25:22.559 "data_offset": 2048,
00:25:22.559 "data_size": 63488
00:25:22.559 }
00:25:22.559 ]
00:25:22.559 }'
00:25:22.559 13:49:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:22.559 13:49:01 -- common/autotest_common.sh@10 -- # set +x
00:25:23.128 13:49:02 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:25:23.128 13:49:02 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:23.128 [2024-07-10 13:49:02.422644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:23.128 [2024-07-10 13:49:02.422778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:23.128 [2024-07-10 13:49:02.422843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:25:23.128 [2024-07-10 13:49:02.422876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:23.128 [2024-07-10 13:49:02.423308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:23.128 [2024-07-10 13:49:02.423377] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:23.128 [2024-07-10 13:49:02.423510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:25:23.128 [2024-07-10 13:49:02.423554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:23.128 pt2
00:25:23.128 13:49:02 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:25:23.387 [2024-07-10 13:49:02.618374] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
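verify_raid_bdev_state, whose trace repeats throughout this run, boils down to pulling one bdev's JSON out of bdev_raid_get_bdevs with the jq select seen at bdev_raid.sh@127 and comparing fields against expected values. A minimal sketch of those checks (the comparisons shown are an illustrative subset, not the full function):

    tmp=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.state' <<< "$tmp")" = "configuring" ]
    [ "$(jq -r '.raid_level' <<< "$tmp")" = "raid5f" ]
    [ "$(jq -r '.strip_size_kb' <<< "$tmp")" -eq 64 ]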
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:23.387 13:49:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:23.645 13:49:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:23.645 "name": "raid_bdev1",
00:25:23.645 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:23.645 "strip_size_kb": 64,
00:25:23.645 "state": "configuring",
00:25:23.645 "raid_level": "raid5f",
00:25:23.645 "superblock": true,
00:25:23.645 "num_base_bdevs": 4,
00:25:23.645 "num_base_bdevs_discovered": 1,
00:25:23.645 "num_base_bdevs_operational": 4,
00:25:23.645 "base_bdevs_list": [
00:25:23.645 {
00:25:23.645 "name": "pt1",
00:25:23.645 "uuid": "bca4829f-cf04-5c68-b022-884a042797fe",
00:25:23.645 "is_configured": true,
00:25:23.645 "data_offset": 2048,
00:25:23.645 "data_size": 63488
00:25:23.645 },
00:25:23.645 {
00:25:23.645 "name": null,
00:25:23.645 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:23.645 "is_configured": false,
00:25:23.645 "data_offset": 2048,
00:25:23.645 "data_size": 63488
00:25:23.645 },
00:25:23.645 {
00:25:23.645 "name": null,
00:25:23.645 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:23.645 "is_configured": false,
00:25:23.645 "data_offset": 2048,
00:25:23.645 "data_size": 63488
00:25:23.645 },
00:25:23.645 {
00:25:23.645 "name": null,
00:25:23.645 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:23.645 "is_configured": false,
00:25:23.645 "data_offset": 2048,
00:25:23.645 "data_size": 63488
00:25:23.645 }
00:25:23.645 ]
00:25:23.645 }'
00:25:23.645 13:49:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:23.645 13:49:02 -- common/autotest_common.sh@10 -- # set +x
00:25:24.211 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:25:24.211 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:25:24.211 13:49:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:24.211 [2024-07-10 13:49:03.560744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:24.211 [2024-07-10 13:49:03.560900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:24.211 [2024-07-10 13:49:03.560946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:25:24.211 [2024-07-10 13:49:03.560979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:24.211 [2024-07-10 13:49:03.561407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:24.211 [2024-07-10 13:49:03.561482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:24.211 [2024-07-10 13:49:03.561622] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:25:24.211 [2024-07-10 13:49:03.561665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:24.211 pt2
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:25:24.469 [2024-07-10 13:49:03.752432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:25:24.469 [2024-07-10 13:49:03.752574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:24.469 [2024-07-10 13:49:03.752632] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:25:24.469 [2024-07-10 13:49:03.752670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:24.469 [2024-07-10 13:49:03.753113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:24.469 [2024-07-10 13:49:03.753213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:25:24.469 [2024-07-10 13:49:03.753347] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:25:24.469 [2024-07-10 13:49:03.753388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:25:24.469 pt3
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:25:24.469 13:49:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:25:24.726 [2024-07-10 13:49:03.940102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:25:24.726 [2024-07-10 13:49:03.940252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:24.726 [2024-07-10 13:49:03.940325] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:25:24.726 [2024-07-10 13:49:03.940373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:24.726 [2024-07-10 13:49:03.940789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:24.726 [2024-07-10 13:49:03.940874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:25:24.726 [2024-07-10 13:49:03.941028] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:25:24.726 [2024-07-10 13:49:03.941075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:25:24.726 [2024-07-10 13:49:03.941217] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880
00:25:24.726 [2024-07-10 13:49:03.941253] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:25:24.726 [2024-07-10 13:49:03.941368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:25:24.726 [2024-07-10 13:49:03.948521] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880
00:25:24.726 [2024-07-10 13:49:03.948573] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880
00:25:24.726 [2024-07-10 13:49:03.948747] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:24.726 pt4
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:24.726 13:49:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:24.985 13:49:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:24.985 "name": "raid_bdev1",
00:25:24.985 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:24.985 "strip_size_kb": 64,
00:25:24.985 "state": "online",
00:25:24.985 "raid_level": "raid5f",
00:25:24.985 "superblock": true,
00:25:24.985 "num_base_bdevs": 4,
00:25:24.985 "num_base_bdevs_discovered": 4,
00:25:24.985 "num_base_bdevs_operational": 4,
00:25:24.985 "base_bdevs_list": [
00:25:24.985 {
00:25:24.985 "name": "pt1",
00:25:24.985 "uuid": "bca4829f-cf04-5c68-b022-884a042797fe",
00:25:24.985 "is_configured": true,
00:25:24.985 "data_offset": 2048,
00:25:24.985 "data_size": 63488
00:25:24.985 },
00:25:24.985 {
00:25:24.985 "name": "pt2",
00:25:24.985 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:24.985 "is_configured": true,
00:25:24.985 "data_offset": 2048,
00:25:24.985 "data_size": 63488
00:25:24.985 },
00:25:24.985 {
00:25:24.985 "name": "pt3",
00:25:24.985 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:24.985 "is_configured": true,
00:25:24.985 "data_offset": 2048,
00:25:24.985 "data_size": 63488
00:25:24.985 },
00:25:24.985 {
00:25:24.985 "name": "pt4",
00:25:24.985 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:24.985 "is_configured": true,
00:25:24.985 "data_offset": 2048,
00:25:24.985 "data_size": 63488
00:25:24.985 }
00:25:24.985 ]
00:25:24.985 }'
00:25:24.985 13:49:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:24.985 13:49:04 -- common/autotest_common.sh@10 -- # set +x
00:25:25.554 13:49:04 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:25:25.554 13:49:04 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:25:25.813 [2024-07-10 13:49:04.928505] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:25.813 13:49:04 -- bdev/bdev_raid.sh@430 -- # '[' d27509e8-bdbd-4034-8e04-0f2b18c3b0ce '!=' d27509e8-bdbd-4034-8e04-0f2b18c3b0ce ']'
00:25:25.813 13:49:04 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f
00:25:25.813 13:49:04 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:25:25.813 13:49:04 -- bdev/bdev_raid.sh@196 -- # return 0
00:25:25.813 13:49:04 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:25:25.813 [2024-07-10 13:49:05.116127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
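Because raid5f carries redundancy (the has_redundancy case at bdev_raid.sh@195 returns 0 for it), deleting pt1 out from under the running array is expected to leave it online with three of four members rather than failing it. An illustrative probe of that condition, built only from fields already shown in the dumps above:

    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") |
               "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected at this point in the run: online 3/4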
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:25.813 13:49:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:26.072 13:49:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:26.072 "name": "raid_bdev1",
00:25:26.072 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:26.072 "strip_size_kb": 64,
00:25:26.072 "state": "online",
00:25:26.072 "raid_level": "raid5f",
00:25:26.072 "superblock": true,
00:25:26.072 "num_base_bdevs": 4,
00:25:26.072 "num_base_bdevs_discovered": 3,
00:25:26.072 "num_base_bdevs_operational": 3,
00:25:26.072 "base_bdevs_list": [
00:25:26.072 {
00:25:26.072 "name": null,
00:25:26.072 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:26.072 "is_configured": false,
00:25:26.072 "data_offset": 2048,
00:25:26.072 "data_size": 63488
00:25:26.072 },
00:25:26.072 {
00:25:26.072 "name": "pt2",
00:25:26.072 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:26.072 "is_configured": true,
00:25:26.072 "data_offset": 2048,
00:25:26.072 "data_size": 63488
00:25:26.072 },
00:25:26.072 {
00:25:26.072 "name": "pt3",
00:25:26.072 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:26.072 "is_configured": true,
00:25:26.072 "data_offset": 2048,
00:25:26.072 "data_size": 63488
00:25:26.072 },
00:25:26.072 {
00:25:26.072 "name": "pt4",
00:25:26.072 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:26.072 "is_configured": true,
00:25:26.072 "data_offset": 2048,
00:25:26.072 "data_size": 63488
00:25:26.072 }
00:25:26.072 ]
00:25:26.072 }'
00:25:26.072 13:49:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:26.072 13:49:05 -- common/autotest_common.sh@10 -- # set +x
00:25:26.651 13:49:05 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:26.910 [2024-07-10 13:49:06.102461] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:26.910 [2024-07-10 13:49:06.102563] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:26.910 [2024-07-10 13:49:06.102663] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:26.910 [2024-07-10 13:49:06.102747] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:26.910 [2024-07-10 13:49:06.102808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline
00:25:26.910 13:49:06 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:26.910 13:49:06 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:25:27.169 13:49:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:25:27.427 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:25:27.427 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:25:27.427 13:49:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:25:27.686 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:25:27.686 13:49:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:25:27.686 13:49:06 -- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:25:27.686 13:49:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:25:27.686 13:49:06 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:27.945 [2024-07-10 13:49:07.132701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:27.945 [2024-07-10 13:49:07.132867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:27.945 [2024-07-10 13:49:07.132916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:25:27.945 [2024-07-10 13:49:07.132977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:27.945 [2024-07-10 13:49:07.135243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:27.945 [2024-07-10 13:49:07.135354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:27.945 [2024-07-10 13:49:07.135526] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:25:27.945 [2024-07-10 13:49:07.135603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:27.945 pt2
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:27.945 13:49:07 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:28.204 13:49:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:28.204 13:49:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:28.204 13:49:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:28.204 "name": "raid_bdev1",
00:25:28.204 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:28.204 "strip_size_kb": 64,
00:25:28.204 "state": "configuring",
00:25:28.204 "raid_level": "raid5f",
00:25:28.204 "superblock": true,
00:25:28.204 "num_base_bdevs": 4,
00:25:28.204 "num_base_bdevs_discovered": 1,
00:25:28.204 "num_base_bdevs_operational": 3,
00:25:28.204 "base_bdevs_list": [
00:25:28.204 {
00:25:28.204 "name": null,
00:25:28.204 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:28.204 "is_configured": false,
00:25:28.204 "data_offset": 2048,
00:25:28.204 "data_size": 63488
00:25:28.204 },
00:25:28.204 {
00:25:28.204 "name": "pt2",
00:25:28.204 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:28.204 "is_configured": true,
00:25:28.204 "data_offset": 2048,
00:25:28.204 "data_size": 63488
00:25:28.204 },
00:25:28.204 {
00:25:28.204 "name": null,
00:25:28.204 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:28.204 "is_configured": false,
00:25:28.204 "data_offset": 2048,
00:25:28.204 "data_size": 63488
00:25:28.204 },
00:25:28.204 {
00:25:28.204 "name": null,
00:25:28.204 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:28.204 "is_configured": false,
00:25:28.204 "data_offset": 2048,
00:25:28.204 "data_size": 63488
00:25:28.204 }
00:25:28.204 ]
00:25:28.204 }'
00:25:28.204 13:49:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:28.204 13:49:07 -- common/autotest_common.sh@10 -- # set +x
00:25:28.768 13:49:07 -- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:25:28.768 13:49:07 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:25:28.768 13:49:07 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:25:29.027 [2024-07-10 13:49:08.162906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:25:29.027 [2024-07-10 13:49:08.163078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:29.027 [2024-07-10 13:49:08.163139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:25:29.027 [2024-07-10 13:49:08.163193] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:29.027 [2024-07-10 13:49:08.163706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:29.027 [2024-07-10 13:49:08.163795] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:25:29.027 [2024-07-10 13:49:08.163952] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:25:29.027 [2024-07-10 13:49:08.164004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:25:29.027 pt3
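Each passthru bdev re-created with its original UUID is immediately claimed back into raid_bdev1 through the superblock found by the examine path, so num_base_bdevs_discovered climbs while the array stays in configuring until it matches num_base_bdevs_operational. A hedged way to watch that convergence from outside the test (the polling loop and interval are illustrative):

    until [ "$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')" = "3" ]; do
        sleep 0.1
    done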
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:29.027 13:49:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:29.283 13:49:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:29.283 "name": "raid_bdev1",
00:25:29.283 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:29.283 "strip_size_kb": 64,
00:25:29.283 "state": "configuring",
00:25:29.283 "raid_level": "raid5f",
00:25:29.283 "superblock": true,
00:25:29.283 "num_base_bdevs": 4,
00:25:29.283 "num_base_bdevs_discovered": 2,
00:25:29.283 "num_base_bdevs_operational": 3,
00:25:29.283 "base_bdevs_list": [
00:25:29.283 {
00:25:29.283 "name": null,
00:25:29.283 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:29.283 "is_configured": false,
00:25:29.283 "data_offset": 2048,
00:25:29.283 "data_size": 63488
00:25:29.283 },
00:25:29.283 {
00:25:29.283 "name": "pt2",
00:25:29.283 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:29.283 "is_configured": true,
00:25:29.283 "data_offset": 2048,
00:25:29.283 "data_size": 63488
00:25:29.283 },
00:25:29.283 {
00:25:29.283 "name": "pt3",
00:25:29.283 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:29.283 "is_configured": true,
00:25:29.283 "data_offset": 2048,
00:25:29.283 "data_size": 63488
00:25:29.283 },
00:25:29.283 {
00:25:29.283 "name": null,
00:25:29.283 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:29.283 "is_configured": false,
00:25:29.283 "data_offset": 2048,
00:25:29.283 "data_size": 63488
00:25:29.283 }
00:25:29.283 ]
00:25:29.283 }'
00:25:29.283 13:49:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:29.283 13:49:08 -- common/autotest_common.sh@10 -- # set +x
00:25:29.847 13:49:08 -- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:25:29.847 13:49:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:25:29.847 13:49:08 -- bdev/bdev_raid.sh@462 -- # i=3
00:25:29.847 13:49:08 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:25:29.847 [2024-07-10 13:49:09.189200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:25:29.847 [2024-07-10 13:49:09.189381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:29.847 [2024-07-10 13:49:09.189443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:25:29.847 [2024-07-10 13:49:09.189492] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:29.847 [2024-07-10 13:49:09.190029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:29.847 [2024-07-10 13:49:09.190098] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:25:29.847 [2024-07-10 13:49:09.190282] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:25:29.847 [2024-07-10 13:49:09.190336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:25:29.847 [2024-07-10 13:49:09.190487] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80
00:25:29.847 [2024-07-10 13:49:09.190524] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:25:29.847 [2024-07-10 13:49:09.190673] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:25:29.847 [2024-07-10 13:49:09.198737] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80
00:25:29.847 [2024-07-10 13:49:09.198799] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80
00:25:29.847 [2024-07-10 13:49:09.199128] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:29.847 pt4
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:30.105 "name": "raid_bdev1",
00:25:30.105 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:30.105 "strip_size_kb": 64,
00:25:30.105 "state": "online",
00:25:30.105 "raid_level": "raid5f",
00:25:30.105 "superblock": true,
00:25:30.105 "num_base_bdevs": 4,
00:25:30.105 "num_base_bdevs_discovered": 3,
00:25:30.105 "num_base_bdevs_operational": 3,
00:25:30.105 "base_bdevs_list": [
00:25:30.105 {
00:25:30.105 "name": null,
00:25:30.105 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:30.105 "is_configured": false,
00:25:30.105 "data_offset": 2048,
00:25:30.105 "data_size": 63488
00:25:30.105 },
00:25:30.105 {
00:25:30.105 "name": "pt2",
00:25:30.105 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:30.105 "is_configured": true,
00:25:30.105 "data_offset": 2048,
00:25:30.105 "data_size": 63488
00:25:30.105 },
00:25:30.105 {
00:25:30.105 "name": "pt3",
00:25:30.105 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:30.105 "is_configured": true,
00:25:30.105 "data_offset": 2048,
00:25:30.105 "data_size": 63488
00:25:30.105 },
00:25:30.105 {
00:25:30.105 "name": "pt4",
00:25:30.105 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:30.105 "is_configured": true,
00:25:30.105 "data_offset": 2048,
00:25:30.105 "data_size": 63488
00:25:30.105 }
00:25:30.105 ]
00:25:30.105 }'
00:25:30.105 13:49:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:30.105 13:49:09 -- common/autotest_common.sh@10 -- # set +x
00:25:30.732 13:49:10 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']'
00:25:30.732 13:49:10 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:30.990 [2024-07-10 13:49:10.264442] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:30.990 [2024-07-10 13:49:10.264529] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:30.990 [2024-07-10 13:49:10.264640] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:30.990 [2024-07-10 13:49:10.264723] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:30.990 [2024-07-10 13:49:10.264788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline
00:25:30.990 13:49:10 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:30.990 13:49:10 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:25:31.248 13:49:10 -- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:25:31.248 13:49:10 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:25:31.248 13:49:10 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:31.506 [2024-07-10 13:49:10.667939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:31.506 [2024-07-10 13:49:10.668076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:31.506 [2024-07-10 13:49:10.668133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:25:31.506 [2024-07-10 13:49:10.668188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:31.506 [2024-07-10 13:49:10.670252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:31.506 [2024-07-10 13:49:10.670359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:31.506 [2024-07-10 13:49:10.670500] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:25:31.506 [2024-07-10 13:49:10.670573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:31.506 pt1
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:31.506 "name": "raid_bdev1",
00:25:31.506 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:31.506 "strip_size_kb": 64,
00:25:31.506 "state": "configuring",
00:25:31.506 "raid_level": "raid5f",
00:25:31.506 "superblock": true,
00:25:31.506 "num_base_bdevs": 4,
00:25:31.506 "num_base_bdevs_discovered": 1,
00:25:31.506 "num_base_bdevs_operational": 4,
00:25:31.506 "base_bdevs_list": [
00:25:31.506 {
00:25:31.506 "name": "pt1",
00:25:31.506 "uuid": "bca4829f-cf04-5c68-b022-884a042797fe",
00:25:31.506 "is_configured": true,
00:25:31.506 "data_offset": 2048,
00:25:31.506 "data_size": 63488
00:25:31.506 },
00:25:31.506 {
00:25:31.506 "name": null,
00:25:31.506 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:31.506 "is_configured": false,
00:25:31.506 "data_offset": 2048,
00:25:31.506 "data_size": 63488
00:25:31.506 },
00:25:31.506 {
00:25:31.506 "name": null,
00:25:31.506 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:31.506 "is_configured": false,
00:25:31.506 "data_offset": 2048,
00:25:31.506 "data_size": 63488
00:25:31.506 },
00:25:31.506 {
00:25:31.506 "name": null,
00:25:31.506 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:31.506 "is_configured": false,
00:25:31.506 "data_offset": 2048,
00:25:31.506 "data_size": 63488
00:25:31.506 }
00:25:31.506 ]
00:25:31.506 }'
00:25:31.506 13:49:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:31.506 13:49:10 -- common/autotest_common.sh@10 -- # set +x
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:25:32.442 13:49:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:25:32.701 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:25:32.701 13:49:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:25:32.701 13:49:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:25:32.701 13:49:12 -- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:25:32.701 13:49:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:25:32.701 13:49:12 -- bdev/bdev_raid.sh@489 -- # i=3
00:25:32.701 13:49:12 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:25:32.960 [2024-07-10 13:49:12.215465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:25:32.960 [2024-07-10 13:49:12.215972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:32.960 [2024-07-10 13:49:12.216123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80
00:25:32.960 [2024-07-10 13:49:12.216245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:32.960 [2024-07-10 13:49:12.216753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:32.960 [2024-07-10 13:49:12.216921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:25:32.960 [2024-07-10 13:49:12.217127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:25:32.960 [2024-07-10 13:49:12.217170] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)
00:25:32.960 [2024-07-10 13:49:12.217192] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:32.960 [2024-07-10 13:49:12.217219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring
00:25:32.960 [2024-07-10 13:49:12.217337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:25:32.960 pt4
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@125 -- # local tmp
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:32.960 13:49:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:33.218 13:49:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:25:33.218 "name": "raid_bdev1",
00:25:33.218 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce",
00:25:33.218 "strip_size_kb": 64,
00:25:33.218 "state": "configuring",
00:25:33.218 "raid_level": "raid5f",
00:25:33.218 "superblock": true,
00:25:33.218 "num_base_bdevs": 4,
00:25:33.218 "num_base_bdevs_discovered": 1,
00:25:33.218 "num_base_bdevs_operational": 3,
00:25:33.218 "base_bdevs_list": [
00:25:33.218 {
00:25:33.218 "name": null,
00:25:33.218 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:33.218 "is_configured": false,
00:25:33.218 "data_offset": 2048,
00:25:33.218 "data_size": 63488
00:25:33.218 },
00:25:33.218 {
00:25:33.218 "name": null,
00:25:33.218 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181",
00:25:33.218 "is_configured": false,
00:25:33.218 "data_offset": 2048,
00:25:33.218 "data_size": 63488
00:25:33.218 },
00:25:33.218 {
00:25:33.218 "name": null,
00:25:33.218 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2",
00:25:33.218 "is_configured": false,
00:25:33.218 "data_offset": 2048,
00:25:33.218 "data_size": 63488
00:25:33.218 },
00:25:33.218 {
00:25:33.218 "name": "pt4",
00:25:33.218 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d",
00:25:33.218 "is_configured": true,
00:25:33.218 "data_offset": 2048,
00:25:33.218 "data_size": 63488
00:25:33.218 }
00:25:33.218 ]
00:25:33.218 }'
00:25:33.218 13:49:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:25:33.218 13:49:12 -- common/autotest_common.sh@10 -- # set +x
00:25:33.790 13:49:12 -- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:25:33.790 13:49:12 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:25:33.790 13:49:12 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:34.049 [2024-07-10 13:49:13.149874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:34.049 [2024-07-10 13:49:13.150073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:34.049 [2024-07-10 13:49:13.150130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580
00:25:34.049 [2024-07-10 13:49:13.150204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:34.049 [2024-07-10 13:49:13.150685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:34.049 [2024-07-10 13:49:13.150775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:34.049 [2024-07-10 13:49:13.150923] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:25:34.049 [2024-07-10 13:49:13.150972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:34.049 pt2
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:25:34.049 [2024-07-10 13:49:13.341498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:25:34.049 [2024-07-10 13:49:13.341636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:34.049 [2024-07-10 13:49:13.341679] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880
00:25:34.049 [2024-07-10 13:49:13.341714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:34.049 [2024-07-10 13:49:13.342145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:34.049 [2024-07-10 13:49:13.342235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:25:34.049 [2024-07-10 13:49:13.342377] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:25:34.049 [2024-07-10 13:49:13.342423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:25:34.049 [2024-07-10 13:49:13.342548] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280
00:25:34.049 [2024-07-10 13:49:13.342579] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:25:34.049 [2024-07-10 13:49:13.342690] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630
00:25:34.049 [2024-07-10 13:49:13.349900] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280
00:25:34.049 [2024-07-10 13:49:13.349954] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280
00:25:34.049 [2024-07-10 13:49:13.350202] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:34.049 pt3
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@125 -- # local tmp
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.049 13:49:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.309 13:49:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.309 "name": "raid_bdev1", 00:25:34.309 "uuid": "d27509e8-bdbd-4034-8e04-0f2b18c3b0ce", 00:25:34.309 "strip_size_kb": 64, 00:25:34.309 "state": "online", 00:25:34.309 "raid_level": "raid5f", 00:25:34.309 "superblock": true, 00:25:34.309 "num_base_bdevs": 4, 00:25:34.309 "num_base_bdevs_discovered": 3, 00:25:34.309 "num_base_bdevs_operational": 3, 00:25:34.309 "base_bdevs_list": [ 00:25:34.309 { 00:25:34.309 "name": null, 00:25:34.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.309 "is_configured": false, 00:25:34.309 "data_offset": 2048, 00:25:34.309 "data_size": 63488 00:25:34.309 }, 00:25:34.309 { 00:25:34.309 "name": "pt2", 00:25:34.309 "uuid": "d0296eec-22d3-5102-a4e4-e30750bcf181", 00:25:34.309 "is_configured": true, 00:25:34.309 "data_offset": 2048, 00:25:34.309 "data_size": 63488 00:25:34.309 }, 00:25:34.309 { 00:25:34.309 "name": "pt3", 00:25:34.309 "uuid": "5665a18f-33de-5776-8554-55f19e539dc2", 00:25:34.309 "is_configured": true, 00:25:34.309 "data_offset": 2048, 00:25:34.309 "data_size": 63488 00:25:34.309 }, 00:25:34.309 { 00:25:34.309 "name": "pt4", 00:25:34.309 "uuid": "de50b7af-b567-5cf8-9998-11edbe9fe53d", 00:25:34.309 "is_configured": true, 00:25:34.309 "data_offset": 2048, 00:25:34.309 "data_size": 63488 00:25:34.309 } 00:25:34.309 ] 00:25:34.309 }' 00:25:34.309 13:49:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.309 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.878 13:49:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:34.878 13:49:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:35.137 [2024-07-10 13:49:14.257518] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.137 13:49:14 -- bdev/bdev_raid.sh@506 -- # '[' d27509e8-bdbd-4034-8e04-0f2b18c3b0ce '!=' d27509e8-bdbd-4034-8e04-0f2b18c3b0ce ']' 00:25:35.137 13:49:14 -- bdev/bdev_raid.sh@511 -- # killprocess 134155 00:25:35.137 13:49:14 -- common/autotest_common.sh@926 -- # '[' -z 134155 ']' 00:25:35.137 13:49:14 -- common/autotest_common.sh@930 -- # kill -0 134155 00:25:35.137 13:49:14 -- common/autotest_common.sh@931 -- # uname 00:25:35.137 13:49:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:35.137 13:49:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134155 00:25:35.137 killing process with pid 134155 00:25:35.137 13:49:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:35.137 13:49:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:35.137 13:49:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134155' 00:25:35.137 13:49:14 -- common/autotest_common.sh@945 -- # kill 134155 00:25:35.137 13:49:14 -- common/autotest_common.sh@950 -- # wait 134155 00:25:35.137 [2024-07-10 13:49:14.297862] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:35.137 [2024-07-10 13:49:14.297950] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.138 [2024-07-10 13:49:14.298048] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.138 [2024-07-10 13:49:14.298076] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:25:35.397 [2024-07-10 13:49:14.689037] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:36.812 ************************************ 00:25:36.812 END TEST raid5f_superblock_test 00:25:36.812 ************************************ 00:25:36.812 13:49:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:36.812 00:25:36.812 real 0m20.378s 00:25:36.812 user 0m37.066s 00:25:36.812 sys 0m2.360s 00:25:36.812 13:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.812 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:36.812 13:49:16 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:36.812 13:49:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.812 13:49:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.812 ************************************ 00:25:36.812 START TEST raid5f_rebuild_test 00:25:36.812 ************************************ 00:25:36.812 13:49:16 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@544 -- # 
raid_pid=134844 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134844 /var/tmp/spdk-raid.sock 00:25:36.812 13:49:16 -- common/autotest_common.sh@819 -- # '[' -z 134844 ']' 00:25:36.812 13:49:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:36.812 13:49:16 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:36.812 13:49:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.812 13:49:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:36.812 13:49:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.812 13:49:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.812 [2024-07-10 13:49:16.134908] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:36.812 [2024-07-10 13:49:16.135557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134844 ] 00:25:36.812 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:36.812 Zero copy mechanism will not be used. 00:25:37.073 [2024-07-10 13:49:16.302615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.333 [2024-07-10 13:49:16.505588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.593 [2024-07-10 13:49:16.690475] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:37.852 13:49:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.852 13:49:16 -- common/autotest_common.sh@852 -- # return 0 00:25:37.852 13:49:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:37.852 13:49:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:37.852 13:49:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:38.111 BaseBdev1 00:25:38.111 13:49:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:38.111 13:49:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:38.111 13:49:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:38.369 BaseBdev2 00:25:38.369 13:49:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:38.369 13:49:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:38.369 13:49:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:38.369 BaseBdev3 00:25:38.629 13:49:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:38.629 13:49:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:38.629 13:49:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:38.629 BaseBdev4 00:25:38.629 13:49:17 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:38.888 spare_malloc 00:25:38.888 
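The stretch above is the rebuild test's fixture setup: bdevperf itself is launched as the RPC server (-r selects the UNIX socket; -z holds the actual workload back until it is kicked off over RPC), the harness blocks on waitforlisten, and the array members plus the spare's backing device are plain malloc bdevs. A condensed sketch of that sequence, reconstructed from the flags visible in the trace; the $rpc shorthand and the loop are ours, not the script's:

    sock=/var/tmp/spdk-raid.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

    # bdevperf as the RPC target: 60 s of 50/50 randrw in 3 MiB I/Os at
    # queue depth 2 against raid_bdev1, with bdev_raid debug logging on.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$sock"   # harness helper, as invoked in the trace

    # Four 32 MiB members with 512-byte blocks for the raid5f array.
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
    done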
13:49:18 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:39.146 spare_delay 00:25:39.146 13:49:18 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:39.146 [2024-07-10 13:49:18.443122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:39.146 [2024-07-10 13:49:18.443273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.146 [2024-07-10 13:49:18.443319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:39.146 [2024-07-10 13:49:18.443389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.146 [2024-07-10 13:49:18.445500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.146 [2024-07-10 13:49:18.445576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:39.146 spare 00:25:39.146 13:49:18 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:39.405 [2024-07-10 13:49:18.622876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:39.405 [2024-07-10 13:49:18.624628] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:39.405 [2024-07-10 13:49:18.624711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:39.405 [2024-07-10 13:49:18.624763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:39.405 [2024-07-10 13:49:18.624858] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:39.405 [2024-07-10 13:49:18.624891] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:39.405 [2024-07-10 13:49:18.625070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:39.405 [2024-07-10 13:49:18.633672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:39.405 [2024-07-10 13:49:18.633724] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:39.405 [2024-07-10 13:49:18.633997] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.405 13:49:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.405 13:49:18 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.664 13:49:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:39.664 "name": "raid_bdev1", 00:25:39.664 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:39.664 "strip_size_kb": 64, 00:25:39.664 "state": "online", 00:25:39.664 "raid_level": "raid5f", 00:25:39.664 "superblock": false, 00:25:39.664 "num_base_bdevs": 4, 00:25:39.664 "num_base_bdevs_discovered": 4, 00:25:39.664 "num_base_bdevs_operational": 4, 00:25:39.664 "base_bdevs_list": [ 00:25:39.664 { 00:25:39.664 "name": "BaseBdev1", 00:25:39.664 "uuid": "5ec856e2-d1d2-4e97-a4d0-25cd8c305a26", 00:25:39.664 "is_configured": true, 00:25:39.664 "data_offset": 0, 00:25:39.664 "data_size": 65536 00:25:39.664 }, 00:25:39.665 { 00:25:39.665 "name": "BaseBdev2", 00:25:39.665 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:39.665 "is_configured": true, 00:25:39.665 "data_offset": 0, 00:25:39.665 "data_size": 65536 00:25:39.665 }, 00:25:39.665 { 00:25:39.665 "name": "BaseBdev3", 00:25:39.665 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:39.665 "is_configured": true, 00:25:39.665 "data_offset": 0, 00:25:39.665 "data_size": 65536 00:25:39.665 }, 00:25:39.665 { 00:25:39.665 "name": "BaseBdev4", 00:25:39.665 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:39.665 "is_configured": true, 00:25:39.665 "data_offset": 0, 00:25:39.665 "data_size": 65536 00:25:39.665 } 00:25:39.665 ] 00:25:39.665 }' 00:25:39.665 13:49:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:39.665 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:25:40.233 13:49:19 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.233 13:49:19 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:40.233 [2024-07-10 13:49:19.572733] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.233 13:49:19 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:40.491 13:49:19 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@12 -- # local i 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:40.491 13:49:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:40.750 [2024-07-10 13:49:19.955908] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:40.750 /dev/nbd0 00:25:40.750 13:49:20 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:25:40.750 13:49:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:40.750 13:49:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:40.750 13:49:20 -- common/autotest_common.sh@857 -- # local i 00:25:40.750 13:49:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:40.750 13:49:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:40.750 13:49:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:40.750 13:49:20 -- common/autotest_common.sh@861 -- # break 00:25:40.750 13:49:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:40.750 13:49:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:40.750 13:49:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:40.750 1+0 records in 00:25:40.750 1+0 records out 00:25:40.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338026 s, 12.1 MB/s 00:25:40.750 13:49:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.750 13:49:20 -- common/autotest_common.sh@874 -- # size=4096 00:25:40.750 13:49:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.750 13:49:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:40.750 13:49:20 -- common/autotest_common.sh@877 -- # return 0 00:25:40.750 13:49:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:40.750 13:49:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:40.750 13:49:20 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:40.750 13:49:20 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:40.750 13:49:20 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:40.750 13:49:20 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:41.321 512+0 records in 00:25:41.321 512+0 records out 00:25:41.321 100663296 bytes (101 MB, 96 MiB) copied, 0.439254 s, 229 MB/s 00:25:41.321 13:49:20 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@51 -- # local i 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:41.321 13:49:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:41.321 [2024-07-10 13:49:20.667274] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.581 13:49:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:41.581 13:49:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:41.581 13:49:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:41.581 13:49:20 -- bdev/nbd_common.sh@41 -- # break 00:25:41.581 13:49:20 -- bdev/nbd_common.sh@45 -- # return 0 
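Every size in this stretch is one geometry, worth spelling out once: with four 65536-block members, a 64 KiB strip, and one parity chunk per raid5f stripe, the array exposes three quarters of the raw capacity, and the 384-block write_unit_size is exactly one stripe's worth of data (presumably what the bare 192 echoed above is: the same figure in KiB):

    raw size    = 4 x 65536 blocks        = 262144 blocks
    usable size = 262144 x 3/4            = 196608 blocks   (the num_blocks above)
    strip       = 64 KiB / 512 B          = 128 blocks
    write unit  = 3 data chunks x 128     = 384 blocks = 196608 bytes = 192 KiB
    fill        = dd bs=196608 count=512  = 100663296 bytes = 96 MiB, the whole array

raid5f handles full-stripe writes only, so the fill through /dev/nbd0 is deliberately issued in whole-stripe units; each such write lets parity be computed from the data in hand, with no read-modify-write.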
00:25:41.581 13:49:20 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:41.841 [2024-07-10 13:49:20.939177] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.841 13:49:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.841 13:49:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:41.841 "name": "raid_bdev1", 00:25:41.841 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:41.841 "strip_size_kb": 64, 00:25:41.841 "state": "online", 00:25:41.841 "raid_level": "raid5f", 00:25:41.841 "superblock": false, 00:25:41.841 "num_base_bdevs": 4, 00:25:41.841 "num_base_bdevs_discovered": 3, 00:25:41.841 "num_base_bdevs_operational": 3, 00:25:41.841 "base_bdevs_list": [ 00:25:41.841 { 00:25:41.841 "name": null, 00:25:41.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.841 "is_configured": false, 00:25:41.841 "data_offset": 0, 00:25:41.841 "data_size": 65536 00:25:41.841 }, 00:25:41.841 { 00:25:41.841 "name": "BaseBdev2", 00:25:41.841 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:41.841 "is_configured": true, 00:25:41.841 "data_offset": 0, 00:25:41.841 "data_size": 65536 00:25:41.841 }, 00:25:41.841 { 00:25:41.841 "name": "BaseBdev3", 00:25:41.841 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:41.841 "is_configured": true, 00:25:41.841 "data_offset": 0, 00:25:41.841 "data_size": 65536 00:25:41.841 }, 00:25:41.841 { 00:25:41.841 "name": "BaseBdev4", 00:25:41.841 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:41.841 "is_configured": true, 00:25:41.841 "data_offset": 0, 00:25:41.841 "data_size": 65536 00:25:41.841 } 00:25:41.841 ] 00:25:41.841 }' 00:25:41.841 13:49:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:41.841 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.409 13:49:21 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:42.668 [2024-07-10 13:49:21.853636] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:42.668 [2024-07-10 13:49:21.853761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:42.668 [2024-07-10 13:49:21.867904] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:25:42.668 [2024-07-10 13:49:21.875849] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:42.668 13:49:21 -- bdev/bdev_raid.sh@598 -- # sleep 
1 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.724 13:49:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.039 13:49:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:44.039 "name": "raid_bdev1", 00:25:44.039 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:44.039 "strip_size_kb": 64, 00:25:44.039 "state": "online", 00:25:44.039 "raid_level": "raid5f", 00:25:44.039 "superblock": false, 00:25:44.039 "num_base_bdevs": 4, 00:25:44.039 "num_base_bdevs_discovered": 4, 00:25:44.039 "num_base_bdevs_operational": 4, 00:25:44.039 "process": { 00:25:44.039 "type": "rebuild", 00:25:44.039 "target": "spare", 00:25:44.039 "progress": { 00:25:44.039 "blocks": 21120, 00:25:44.039 "percent": 10 00:25:44.039 } 00:25:44.039 }, 00:25:44.039 "base_bdevs_list": [ 00:25:44.039 { 00:25:44.039 "name": "spare", 00:25:44.039 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:44.040 "is_configured": true, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 }, 00:25:44.040 { 00:25:44.040 "name": "BaseBdev2", 00:25:44.040 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:44.040 "is_configured": true, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 }, 00:25:44.040 { 00:25:44.040 "name": "BaseBdev3", 00:25:44.040 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:44.040 "is_configured": true, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 }, 00:25:44.040 { 00:25:44.040 "name": "BaseBdev4", 00:25:44.040 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:44.040 "is_configured": true, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 } 00:25:44.040 ] 00:25:44.040 }' 00:25:44.040 13:49:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:44.040 13:49:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.040 13:49:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:44.040 13:49:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.040 13:49:23 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:44.040 [2024-07-10 13:49:23.382213] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.040 [2024-07-10 13:49:23.383497] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:44.040 [2024-07-10 13:49:23.383605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:44.299 13:49:23 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.299 "name": "raid_bdev1", 00:25:44.299 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:44.299 "strip_size_kb": 64, 00:25:44.299 "state": "online", 00:25:44.299 "raid_level": "raid5f", 00:25:44.299 "superblock": false, 00:25:44.299 "num_base_bdevs": 4, 00:25:44.299 "num_base_bdevs_discovered": 3, 00:25:44.299 "num_base_bdevs_operational": 3, 00:25:44.299 "base_bdevs_list": [ 00:25:44.299 { 00:25:44.299 "name": null, 00:25:44.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.299 "is_configured": false, 00:25:44.299 "data_offset": 0, 00:25:44.299 "data_size": 65536 00:25:44.299 }, 00:25:44.299 { 00:25:44.299 "name": "BaseBdev2", 00:25:44.299 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:44.299 "is_configured": true, 00:25:44.299 "data_offset": 0, 00:25:44.299 "data_size": 65536 00:25:44.299 }, 00:25:44.299 { 00:25:44.299 "name": "BaseBdev3", 00:25:44.299 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:44.299 "is_configured": true, 00:25:44.299 "data_offset": 0, 00:25:44.299 "data_size": 65536 00:25:44.299 }, 00:25:44.299 { 00:25:44.299 "name": "BaseBdev4", 00:25:44.299 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:44.299 "is_configured": true, 00:25:44.299 "data_offset": 0, 00:25:44.299 "data_size": 65536 00:25:44.299 } 00:25:44.299 ] 00:25:44.299 }' 00:25:44.299 13:49:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.299 13:49:23 -- common/autotest_common.sh@10 -- # set +x 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.235 "name": "raid_bdev1", 00:25:45.235 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:45.235 "strip_size_kb": 64, 00:25:45.235 "state": "online", 00:25:45.235 "raid_level": "raid5f", 00:25:45.235 "superblock": false, 00:25:45.235 "num_base_bdevs": 4, 00:25:45.235 "num_base_bdevs_discovered": 3, 00:25:45.235 "num_base_bdevs_operational": 3, 00:25:45.235 "base_bdevs_list": [ 00:25:45.235 { 00:25:45.235 "name": null, 00:25:45.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.235 "is_configured": false, 00:25:45.235 "data_offset": 0, 00:25:45.235 "data_size": 65536 00:25:45.235 }, 00:25:45.235 { 00:25:45.235 "name": "BaseBdev2", 00:25:45.235 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 
00:25:45.235 "is_configured": true, 00:25:45.235 "data_offset": 0, 00:25:45.235 "data_size": 65536 00:25:45.235 }, 00:25:45.235 { 00:25:45.235 "name": "BaseBdev3", 00:25:45.235 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:45.235 "is_configured": true, 00:25:45.235 "data_offset": 0, 00:25:45.235 "data_size": 65536 00:25:45.235 }, 00:25:45.235 { 00:25:45.235 "name": "BaseBdev4", 00:25:45.235 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:45.235 "is_configured": true, 00:25:45.235 "data_offset": 0, 00:25:45.235 "data_size": 65536 00:25:45.235 } 00:25:45.235 ] 00:25:45.235 }' 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:45.235 13:49:24 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:45.495 [2024-07-10 13:49:24.740380] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:45.495 [2024-07-10 13:49:24.740463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:45.495 [2024-07-10 13:49:24.754153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:25:45.495 [2024-07-10 13:49:24.762086] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:45.495 13:49:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.431 13:49:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.690 13:49:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.690 "name": "raid_bdev1", 00:25:46.690 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:46.690 "strip_size_kb": 64, 00:25:46.690 "state": "online", 00:25:46.690 "raid_level": "raid5f", 00:25:46.690 "superblock": false, 00:25:46.690 "num_base_bdevs": 4, 00:25:46.690 "num_base_bdevs_discovered": 4, 00:25:46.690 "num_base_bdevs_operational": 4, 00:25:46.690 "process": { 00:25:46.690 "type": "rebuild", 00:25:46.690 "target": "spare", 00:25:46.690 "progress": { 00:25:46.690 "blocks": 23040, 00:25:46.690 "percent": 11 00:25:46.690 } 00:25:46.690 }, 00:25:46.690 "base_bdevs_list": [ 00:25:46.690 { 00:25:46.690 "name": "spare", 00:25:46.690 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:46.690 "is_configured": true, 00:25:46.690 "data_offset": 0, 00:25:46.690 "data_size": 65536 00:25:46.690 }, 00:25:46.690 { 00:25:46.690 "name": "BaseBdev2", 00:25:46.690 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:46.690 "is_configured": true, 00:25:46.690 "data_offset": 0, 00:25:46.690 "data_size": 65536 00:25:46.690 }, 00:25:46.690 { 00:25:46.690 "name": "BaseBdev3", 00:25:46.690 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:46.690 "is_configured": 
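This is the heart of the test: the whole rebuild lifecycle driven over plain RPCs. A member is pulled and the array is checked to be online but degraded; attaching the spare starts a rebuild; the first attempt is deliberately aborted by yanking the spare mid-flight (the "Finished rebuild ... No such device" warning above), and the add is then repeated for the monitored run. Progress is read from the same JSON that carries the array state; the process key only exists while something is running, hence the // "none" fallbacks in the jq lines. A condensed sketch of the cycle, with our own variable names:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_remove_base_bdev BaseBdev1       # degrade: stays online, 3 of 4
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # attach spare: rebuild starts

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r '.process.type   // "none"'     <<< "$info"   # "rebuild" while running
    jq -r '.process.target // "none"'     <<< "$info"   # "spare", the bdev being built
    jq -r '.process.progress.blocks // 0' <<< "$info"   # position in blocks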
true, 00:25:46.690 "data_offset": 0, 00:25:46.690 "data_size": 65536 00:25:46.690 }, 00:25:46.690 { 00:25:46.690 "name": "BaseBdev4", 00:25:46.690 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:46.690 "is_configured": true, 00:25:46.690 "data_offset": 0, 00:25:46.690 "data_size": 65536 00:25:46.690 } 00:25:46.690 ] 00:25:46.690 }' 00:25:46.690 13:49:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:46.690 13:49:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.690 13:49:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@657 -- # local timeout=681 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.949 "name": "raid_bdev1", 00:25:46.949 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:46.949 "strip_size_kb": 64, 00:25:46.949 "state": "online", 00:25:46.949 "raid_level": "raid5f", 00:25:46.949 "superblock": false, 00:25:46.949 "num_base_bdevs": 4, 00:25:46.949 "num_base_bdevs_discovered": 4, 00:25:46.949 "num_base_bdevs_operational": 4, 00:25:46.949 "process": { 00:25:46.949 "type": "rebuild", 00:25:46.949 "target": "spare", 00:25:46.949 "progress": { 00:25:46.949 "blocks": 26880, 00:25:46.949 "percent": 13 00:25:46.949 } 00:25:46.949 }, 00:25:46.949 "base_bdevs_list": [ 00:25:46.949 { 00:25:46.949 "name": "spare", 00:25:46.949 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:46.949 "is_configured": true, 00:25:46.949 "data_offset": 0, 00:25:46.949 "data_size": 65536 00:25:46.949 }, 00:25:46.949 { 00:25:46.949 "name": "BaseBdev2", 00:25:46.949 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:46.949 "is_configured": true, 00:25:46.949 "data_offset": 0, 00:25:46.949 "data_size": 65536 00:25:46.949 }, 00:25:46.949 { 00:25:46.949 "name": "BaseBdev3", 00:25:46.949 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:46.949 "is_configured": true, 00:25:46.949 "data_offset": 0, 00:25:46.949 "data_size": 65536 00:25:46.949 }, 00:25:46.949 { 00:25:46.949 "name": "BaseBdev4", 00:25:46.949 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:46.949 "is_configured": true, 00:25:46.949 "data_offset": 0, 00:25:46.949 "data_size": 65536 00:25:46.949 } 00:25:46.949 ] 00:25:46.949 }' 00:25:46.949 13:49:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:47.208 13:49:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.208 13:49:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
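The repeated dumps from here to the end of the rebuild are one poll loop. Its deadline uses bash's SECONDS builtin (seconds since the shell started), which is why the trace shows the already-expanded absolute value timeout=681 rather than an offset; once a second the loop re-reads the process object and leaves when the type is no longer "rebuild". A rough sketch of the shape, with an illustrative 60-second budget:

    timeout=$((SECONDS + 60))   # stored as an absolute deadline; 681 in this run
    while ((SECONDS < timeout)); do
        ptype=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == rebuild ]] || break   # done: the process key disappears
        sleep 1
    done

The percent in each sample is the block counter floored against the 196608-block array: 26880/196608 gives 13, 51840 gives 26, 101760 gives 51, 176640 gives 89, and so on through every dump in this loop.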
00:25:47.208 13:49:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.208 13:49:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:48.145 13:49:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:48.145 13:49:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.145 13:49:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:48.145 13:49:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:48.145 13:49:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:48.146 13:49:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:48.146 13:49:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.146 13:49:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:48.406 "name": "raid_bdev1", 00:25:48.406 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:48.406 "strip_size_kb": 64, 00:25:48.406 "state": "online", 00:25:48.406 "raid_level": "raid5f", 00:25:48.406 "superblock": false, 00:25:48.406 "num_base_bdevs": 4, 00:25:48.406 "num_base_bdevs_discovered": 4, 00:25:48.406 "num_base_bdevs_operational": 4, 00:25:48.406 "process": { 00:25:48.406 "type": "rebuild", 00:25:48.406 "target": "spare", 00:25:48.406 "progress": { 00:25:48.406 "blocks": 51840, 00:25:48.406 "percent": 26 00:25:48.406 } 00:25:48.406 }, 00:25:48.406 "base_bdevs_list": [ 00:25:48.406 { 00:25:48.406 "name": "spare", 00:25:48.406 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:48.406 "is_configured": true, 00:25:48.406 "data_offset": 0, 00:25:48.406 "data_size": 65536 00:25:48.406 }, 00:25:48.406 { 00:25:48.406 "name": "BaseBdev2", 00:25:48.406 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:48.406 "is_configured": true, 00:25:48.406 "data_offset": 0, 00:25:48.406 "data_size": 65536 00:25:48.406 }, 00:25:48.406 { 00:25:48.406 "name": "BaseBdev3", 00:25:48.406 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:48.406 "is_configured": true, 00:25:48.406 "data_offset": 0, 00:25:48.406 "data_size": 65536 00:25:48.406 }, 00:25:48.406 { 00:25:48.406 "name": "BaseBdev4", 00:25:48.406 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:48.406 "is_configured": true, 00:25:48.406 "data_offset": 0, 00:25:48.406 "data_size": 65536 00:25:48.406 } 00:25:48.406 ] 00:25:48.406 }' 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.406 13:49:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.426 13:49:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:49.689 13:49:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:49.689 "name": "raid_bdev1", 00:25:49.689 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:49.689 "strip_size_kb": 64, 00:25:49.689 "state": "online", 00:25:49.689 "raid_level": "raid5f", 00:25:49.689 "superblock": false, 00:25:49.689 "num_base_bdevs": 4, 00:25:49.689 "num_base_bdevs_discovered": 4, 00:25:49.689 "num_base_bdevs_operational": 4, 00:25:49.689 "process": { 00:25:49.689 "type": "rebuild", 00:25:49.689 "target": "spare", 00:25:49.689 "progress": { 00:25:49.689 "blocks": 76800, 00:25:49.689 "percent": 39 00:25:49.689 } 00:25:49.689 }, 00:25:49.689 "base_bdevs_list": [ 00:25:49.689 { 00:25:49.689 "name": "spare", 00:25:49.689 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:49.689 "is_configured": true, 00:25:49.689 "data_offset": 0, 00:25:49.689 "data_size": 65536 00:25:49.689 }, 00:25:49.689 { 00:25:49.689 "name": "BaseBdev2", 00:25:49.689 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:49.689 "is_configured": true, 00:25:49.689 "data_offset": 0, 00:25:49.689 "data_size": 65536 00:25:49.689 }, 00:25:49.689 { 00:25:49.689 "name": "BaseBdev3", 00:25:49.689 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:49.689 "is_configured": true, 00:25:49.689 "data_offset": 0, 00:25:49.689 "data_size": 65536 00:25:49.689 }, 00:25:49.689 { 00:25:49.689 "name": "BaseBdev4", 00:25:49.689 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:49.689 "is_configured": true, 00:25:49.689 "data_offset": 0, 00:25:49.689 "data_size": 65536 00:25:49.689 } 00:25:49.689 ] 00:25:49.689 }' 00:25:49.689 13:49:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:49.689 13:49:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.690 13:49:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:49.690 13:49:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.690 13:49:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.627 13:49:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.887 13:49:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:50.887 "name": "raid_bdev1", 00:25:50.887 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:50.887 "strip_size_kb": 64, 00:25:50.887 "state": "online", 00:25:50.887 "raid_level": "raid5f", 00:25:50.887 "superblock": false, 00:25:50.887 "num_base_bdevs": 4, 00:25:50.887 "num_base_bdevs_discovered": 4, 00:25:50.887 "num_base_bdevs_operational": 4, 00:25:50.887 "process": { 00:25:50.887 "type": "rebuild", 00:25:50.887 "target": "spare", 00:25:50.887 "progress": { 00:25:50.887 "blocks": 101760, 00:25:50.887 "percent": 51 00:25:50.887 } 00:25:50.887 }, 00:25:50.887 "base_bdevs_list": [ 00:25:50.887 { 00:25:50.887 "name": "spare", 00:25:50.887 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:50.887 "is_configured": true, 
00:25:50.887 "data_offset": 0, 00:25:50.887 "data_size": 65536 00:25:50.887 }, 00:25:50.887 { 00:25:50.887 "name": "BaseBdev2", 00:25:50.887 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:50.887 "is_configured": true, 00:25:50.887 "data_offset": 0, 00:25:50.887 "data_size": 65536 00:25:50.887 }, 00:25:50.887 { 00:25:50.887 "name": "BaseBdev3", 00:25:50.887 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:50.887 "is_configured": true, 00:25:50.887 "data_offset": 0, 00:25:50.887 "data_size": 65536 00:25:50.887 }, 00:25:50.887 { 00:25:50.887 "name": "BaseBdev4", 00:25:50.887 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:50.887 "is_configured": true, 00:25:50.887 "data_offset": 0, 00:25:50.887 "data_size": 65536 00:25:50.887 } 00:25:50.887 ] 00:25:50.887 }' 00:25:50.887 13:49:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:50.887 13:49:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.887 13:49:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:51.147 13:49:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.147 13:49:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.087 13:49:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.353 13:49:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:52.353 "name": "raid_bdev1", 00:25:52.353 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:52.353 "strip_size_kb": 64, 00:25:52.353 "state": "online", 00:25:52.353 "raid_level": "raid5f", 00:25:52.353 "superblock": false, 00:25:52.353 "num_base_bdevs": 4, 00:25:52.353 "num_base_bdevs_discovered": 4, 00:25:52.353 "num_base_bdevs_operational": 4, 00:25:52.353 "process": { 00:25:52.353 "type": "rebuild", 00:25:52.353 "target": "spare", 00:25:52.353 "progress": { 00:25:52.353 "blocks": 126720, 00:25:52.353 "percent": 64 00:25:52.353 } 00:25:52.353 }, 00:25:52.353 "base_bdevs_list": [ 00:25:52.353 { 00:25:52.353 "name": "spare", 00:25:52.353 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:52.353 "is_configured": true, 00:25:52.353 "data_offset": 0, 00:25:52.353 "data_size": 65536 00:25:52.353 }, 00:25:52.353 { 00:25:52.353 "name": "BaseBdev2", 00:25:52.353 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:52.353 "is_configured": true, 00:25:52.353 "data_offset": 0, 00:25:52.353 "data_size": 65536 00:25:52.353 }, 00:25:52.353 { 00:25:52.353 "name": "BaseBdev3", 00:25:52.354 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:52.354 "is_configured": true, 00:25:52.354 "data_offset": 0, 00:25:52.354 "data_size": 65536 00:25:52.354 }, 00:25:52.354 { 00:25:52.354 "name": "BaseBdev4", 00:25:52.354 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:52.354 "is_configured": true, 00:25:52.354 "data_offset": 0, 00:25:52.354 "data_size": 65536 00:25:52.354 } 00:25:52.354 ] 00:25:52.354 }' 00:25:52.354 13:49:31 -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:25:52.354 13:49:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.354 13:49:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.354 13:49:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.354 13:49:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.304 13:49:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.564 13:49:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.564 "name": "raid_bdev1", 00:25:53.564 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:53.564 "strip_size_kb": 64, 00:25:53.564 "state": "online", 00:25:53.564 "raid_level": "raid5f", 00:25:53.564 "superblock": false, 00:25:53.564 "num_base_bdevs": 4, 00:25:53.564 "num_base_bdevs_discovered": 4, 00:25:53.564 "num_base_bdevs_operational": 4, 00:25:53.564 "process": { 00:25:53.564 "type": "rebuild", 00:25:53.564 "target": "spare", 00:25:53.564 "progress": { 00:25:53.564 "blocks": 151680, 00:25:53.564 "percent": 77 00:25:53.564 } 00:25:53.564 }, 00:25:53.564 "base_bdevs_list": [ 00:25:53.564 { 00:25:53.564 "name": "spare", 00:25:53.564 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:53.564 "is_configured": true, 00:25:53.564 "data_offset": 0, 00:25:53.564 "data_size": 65536 00:25:53.564 }, 00:25:53.564 { 00:25:53.564 "name": "BaseBdev2", 00:25:53.564 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:53.564 "is_configured": true, 00:25:53.564 "data_offset": 0, 00:25:53.565 "data_size": 65536 00:25:53.565 }, 00:25:53.565 { 00:25:53.565 "name": "BaseBdev3", 00:25:53.565 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:53.565 "is_configured": true, 00:25:53.565 "data_offset": 0, 00:25:53.565 "data_size": 65536 00:25:53.565 }, 00:25:53.565 { 00:25:53.565 "name": "BaseBdev4", 00:25:53.565 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:53.565 "is_configured": true, 00:25:53.565 "data_offset": 0, 00:25:53.565 "data_size": 65536 00:25:53.565 } 00:25:53.565 ] 00:25:53.565 }' 00:25:53.565 13:49:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.565 13:49:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.565 13:49:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.565 13:49:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.565 13:49:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:54.498 
13:49:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.498 13:49:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.762 13:49:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.762 "name": "raid_bdev1", 00:25:54.762 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:54.762 "strip_size_kb": 64, 00:25:54.762 "state": "online", 00:25:54.762 "raid_level": "raid5f", 00:25:54.762 "superblock": false, 00:25:54.762 "num_base_bdevs": 4, 00:25:54.762 "num_base_bdevs_discovered": 4, 00:25:54.762 "num_base_bdevs_operational": 4, 00:25:54.762 "process": { 00:25:54.762 "type": "rebuild", 00:25:54.762 "target": "spare", 00:25:54.762 "progress": { 00:25:54.762 "blocks": 176640, 00:25:54.762 "percent": 89 00:25:54.762 } 00:25:54.762 }, 00:25:54.762 "base_bdevs_list": [ 00:25:54.762 { 00:25:54.762 "name": "spare", 00:25:54.762 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:54.762 "is_configured": true, 00:25:54.763 "data_offset": 0, 00:25:54.763 "data_size": 65536 00:25:54.763 }, 00:25:54.763 { 00:25:54.763 "name": "BaseBdev2", 00:25:54.763 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:54.763 "is_configured": true, 00:25:54.763 "data_offset": 0, 00:25:54.763 "data_size": 65536 00:25:54.763 }, 00:25:54.763 { 00:25:54.763 "name": "BaseBdev3", 00:25:54.763 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:54.763 "is_configured": true, 00:25:54.763 "data_offset": 0, 00:25:54.763 "data_size": 65536 00:25:54.763 }, 00:25:54.763 { 00:25:54.763 "name": "BaseBdev4", 00:25:54.763 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:54.763 "is_configured": true, 00:25:54.763 "data_offset": 0, 00:25:54.763 "data_size": 65536 00:25:54.763 } 00:25:54.763 ] 00:25:54.763 }' 00:25:54.763 13:49:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:54.763 13:49:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.763 13:49:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.022 13:49:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.022 13:49:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:55.958 [2024-07-10 13:49:35.113168] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:55.958 [2024-07-10 13:49:35.113344] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:55.958 [2024-07-10 13:49:35.113452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.958 13:49:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.216 "name": "raid_bdev1", 00:25:56.216 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:56.216 "strip_size_kb": 64, 00:25:56.216 "state": 
"online", 00:25:56.216 "raid_level": "raid5f", 00:25:56.216 "superblock": false, 00:25:56.216 "num_base_bdevs": 4, 00:25:56.216 "num_base_bdevs_discovered": 4, 00:25:56.216 "num_base_bdevs_operational": 4, 00:25:56.216 "base_bdevs_list": [ 00:25:56.216 { 00:25:56.216 "name": "spare", 00:25:56.216 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:56.216 "is_configured": true, 00:25:56.216 "data_offset": 0, 00:25:56.216 "data_size": 65536 00:25:56.216 }, 00:25:56.216 { 00:25:56.216 "name": "BaseBdev2", 00:25:56.216 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:56.216 "is_configured": true, 00:25:56.216 "data_offset": 0, 00:25:56.216 "data_size": 65536 00:25:56.216 }, 00:25:56.216 { 00:25:56.216 "name": "BaseBdev3", 00:25:56.216 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:56.216 "is_configured": true, 00:25:56.216 "data_offset": 0, 00:25:56.216 "data_size": 65536 00:25:56.216 }, 00:25:56.216 { 00:25:56.216 "name": "BaseBdev4", 00:25:56.216 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:56.216 "is_configured": true, 00:25:56.216 "data_offset": 0, 00:25:56.216 "data_size": 65536 00:25:56.216 } 00:25:56.216 ] 00:25:56.216 }' 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@660 -- # break 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.216 13:49:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.486 "name": "raid_bdev1", 00:25:56.486 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:56.486 "strip_size_kb": 64, 00:25:56.486 "state": "online", 00:25:56.486 "raid_level": "raid5f", 00:25:56.486 "superblock": false, 00:25:56.486 "num_base_bdevs": 4, 00:25:56.486 "num_base_bdevs_discovered": 4, 00:25:56.486 "num_base_bdevs_operational": 4, 00:25:56.486 "base_bdevs_list": [ 00:25:56.486 { 00:25:56.486 "name": "spare", 00:25:56.486 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:56.486 "is_configured": true, 00:25:56.486 "data_offset": 0, 00:25:56.486 "data_size": 65536 00:25:56.486 }, 00:25:56.486 { 00:25:56.486 "name": "BaseBdev2", 00:25:56.486 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:56.486 "is_configured": true, 00:25:56.486 "data_offset": 0, 00:25:56.486 "data_size": 65536 00:25:56.486 }, 00:25:56.486 { 00:25:56.486 "name": "BaseBdev3", 00:25:56.486 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:56.486 "is_configured": true, 00:25:56.486 "data_offset": 0, 00:25:56.486 "data_size": 65536 00:25:56.486 }, 00:25:56.486 { 00:25:56.486 "name": "BaseBdev4", 00:25:56.486 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:56.486 "is_configured": true, 00:25:56.486 "data_offset": 0, 00:25:56.486 "data_size": 65536 00:25:56.486 } 
00:25:56.486 ] 00:25:56.486 }' 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.486 13:49:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.761 13:49:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:56.761 "name": "raid_bdev1", 00:25:56.761 "uuid": "83ba5ded-dc95-46d9-8eed-0a9c326ad732", 00:25:56.761 "strip_size_kb": 64, 00:25:56.761 "state": "online", 00:25:56.761 "raid_level": "raid5f", 00:25:56.761 "superblock": false, 00:25:56.761 "num_base_bdevs": 4, 00:25:56.761 "num_base_bdevs_discovered": 4, 00:25:56.761 "num_base_bdevs_operational": 4, 00:25:56.761 "base_bdevs_list": [ 00:25:56.761 { 00:25:56.761 "name": "spare", 00:25:56.761 "uuid": "f6f58959-2fdb-564b-a1ea-544642429886", 00:25:56.761 "is_configured": true, 00:25:56.761 "data_offset": 0, 00:25:56.761 "data_size": 65536 00:25:56.761 }, 00:25:56.761 { 00:25:56.761 "name": "BaseBdev2", 00:25:56.761 "uuid": "b42d7900-c299-4f39-8272-95c84385af14", 00:25:56.761 "is_configured": true, 00:25:56.761 "data_offset": 0, 00:25:56.761 "data_size": 65536 00:25:56.761 }, 00:25:56.761 { 00:25:56.761 "name": "BaseBdev3", 00:25:56.761 "uuid": "0c6f7fc3-176b-48a3-a035-081b65df314f", 00:25:56.761 "is_configured": true, 00:25:56.761 "data_offset": 0, 00:25:56.761 "data_size": 65536 00:25:56.761 }, 00:25:56.762 { 00:25:56.762 "name": "BaseBdev4", 00:25:56.762 "uuid": "5bd338bd-c667-4d08-b83d-be295689ef54", 00:25:56.762 "is_configured": true, 00:25:56.762 "data_offset": 0, 00:25:56.762 "data_size": 65536 00:25:56.762 } 00:25:56.762 ] 00:25:56.762 }' 00:25:56.762 13:49:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:56.762 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:25:57.331 13:49:36 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:57.590 [2024-07-10 13:49:36.791131] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:57.590 [2024-07-10 13:49:36.791216] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.590 [2024-07-10 13:49:36.791317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.590 [2024-07-10 13:49:36.791425] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:25:57.590 [2024-07-10 13:49:36.791458] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:25:57.590 13:49:36 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.590 13:49:36 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:57.849 13:49:37 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:57.849 13:49:37 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:57.849 13:49:37 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@12 -- # local i 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:57.849 13:49:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:58.109 /dev/nbd0 00:25:58.109 13:49:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:58.109 13:49:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:58.109 13:49:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:58.109 13:49:37 -- common/autotest_common.sh@857 -- # local i 00:25:58.109 13:49:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:58.109 13:49:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:58.109 13:49:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:58.109 13:49:37 -- common/autotest_common.sh@861 -- # break 00:25:58.109 13:49:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:58.109 13:49:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:58.109 13:49:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.109 1+0 records in 00:25:58.109 1+0 records out 00:25:58.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442269 s, 9.3 MB/s 00:25:58.109 13:49:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.109 13:49:37 -- common/autotest_common.sh@874 -- # size=4096 00:25:58.109 13:49:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.109 13:49:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:58.109 13:49:37 -- common/autotest_common.sh@877 -- # return 0 00:25:58.109 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.109 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:58.109 13:49:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:58.109 /dev/nbd1 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.369 13:49:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:58.369 13:49:37 -- common/autotest_common.sh@857 -- # local i 00:25:58.369 13:49:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:58.369 13:49:37 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:58.369 13:49:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:58.369 13:49:37 -- common/autotest_common.sh@861 -- # break 00:25:58.369 13:49:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:58.369 13:49:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:58.369 13:49:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.369 1+0 records in 00:25:58.369 1+0 records out 00:25:58.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309069 s, 13.3 MB/s 00:25:58.369 13:49:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.369 13:49:37 -- common/autotest_common.sh@874 -- # size=4096 00:25:58.369 13:49:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.369 13:49:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:58.369 13:49:37 -- common/autotest_common.sh@877 -- # return 0 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:58.369 13:49:37 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:58.369 13:49:37 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@51 -- # local i 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.369 13:49:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.628 13:49:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:58.629 13:49:37 -- bdev/nbd_common.sh@41 -- # break 00:25:58.629 13:49:37 -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.629 13:49:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:58.629 13:49:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@41 -- # break 00:25:58.888 13:49:38 -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.888 13:49:38 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:58.888 13:49:38 -- bdev/bdev_raid.sh@709 -- # killprocess 134844 00:25:58.888 13:49:38 -- common/autotest_common.sh@926 -- # '[' -z 134844 ']' 00:25:58.888 13:49:38 -- common/autotest_common.sh@930 -- # kill -0 134844 
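The trace just above is the data-integrity check for the rebuild: the test exports the untouched first base bdev and the freshly rebuilt spare as NBD devices, waits for each to show up in /proc/partitions and answer one direct-I/O read, then byte-compares the two. A condensed sketch of the traced flow (bdev_raid.sh@687-689); the rpc socket path and bdev/NBD names are exactly the ones in the trace, while folding the helpers into direct RPC calls is a simplification:

    # condensed sketch of the traced verification flow (bdev_raid.sh@687-689)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0   # original member, never rebuilt
    $rpc nbd_start_disk spare /dev/nbd1       # member that was just rebuilt
    cmp -i 0 /dev/nbd0 /dev/nbd1              # -i 0: data_offset is 0 in this no-superblock variant
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

If the rebuild reconstructed the spare correctly, cmp exits 0 silently and the test proceeds to kill bdevperf, which is what the trace shows.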
00:25:58.888 13:49:38 -- common/autotest_common.sh@931 -- # uname 00:25:58.888 13:49:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:58.888 13:49:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134844 00:25:58.888 killing process with pid 134844 00:25:58.888 Received shutdown signal, test time was about 60.000000 seconds 00:25:58.888 00:25:58.888 Latency(us) 00:25:58.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.888 =================================================================================================================== 00:25:58.888 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:58.888 13:49:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:58.888 13:49:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:58.888 13:49:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134844' 00:25:58.888 13:49:38 -- common/autotest_common.sh@945 -- # kill 134844 00:25:58.888 13:49:38 -- common/autotest_common.sh@950 -- # wait 134844 00:25:58.888 [2024-07-10 13:49:38.133453] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.457 [2024-07-10 13:49:38.592110] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:00.837 ************************************ 00:26:00.837 END TEST raid5f_rebuild_test 00:26:00.837 ************************************ 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:00.837 00:26:00.837 real 0m23.742s 00:26:00.837 user 0m33.890s 00:26:00.837 sys 0m2.429s 00:26:00.837 13:49:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.837 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:26:00.837 13:49:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:00.837 13:49:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:00.837 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.837 ************************************ 00:26:00.837 START TEST raid5f_rebuild_test_sb 00:26:00.837 ************************************ 00:26:00.837 13:49:39 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:00.837 
13:49:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=135501 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135501 /var/tmp/spdk-raid.sock 00:26:00.837 13:49:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:00.837 13:49:39 -- common/autotest_common.sh@819 -- # '[' -z 135501 ']' 00:26:00.837 13:49:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:00.837 13:49:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:00.837 13:49:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:00.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:00.837 13:49:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:00.837 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.837 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:00.837 Zero copy mechanism will not be used. 00:26:00.837 [2024-07-10 13:49:39.945137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
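The `run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false` invocation above re-enters the same rebuild function with four positional parameters: raid level, number of base bdevs, superblock on, background I/O off. The traced @517-540 lines show how those become `bdev_raid_create` options; a condensed sketch (the helper name and the final echo are illustrative, the option logic follows the trace):

    # how the traced bdev_raid.sh@517-540 lines assemble the create options;
    # 'raid_rebuild_test_args' is an illustrative name, not the real function
    raid_rebuild_test_args() {
        local raid_level=$1 num_base_bdevs=$2 superblock=$3 background_io=$4
        local create_arg=""
        if [ "$raid_level" != raid1 ]; then
            local strip_size=64
            create_arg+=" -z $strip_size"   # strip size in KiB (raid5f path)
        fi
        if [ "$superblock" = true ]; then
            create_arg+=" -s"               # write an on-disk superblock on each base bdev
        fi
        echo "$create_arg"
    }
    raid_rebuild_test_args raid5f 4 true false   # -> " -z 64 -s", matching the trace

With `-s` set, each base bdev reserves space for the superblock, which is why the JSON dumps in this test report data_offset 2048 and data_size 63488 instead of the 0/65536 seen in the non-superblock run.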
00:26:00.837 [2024-07-10 13:49:39.945265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135501 ] 00:26:00.837 [2024-07-10 13:49:40.103914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.113 [2024-07-10 13:49:40.285630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.415 [2024-07-10 13:49:40.484021] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.415 13:49:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:01.415 13:49:40 -- common/autotest_common.sh@852 -- # return 0 00:26:01.415 13:49:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:01.415 13:49:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:01.415 13:49:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:01.672 BaseBdev1_malloc 00:26:01.672 13:49:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:01.930 [2024-07-10 13:49:41.188094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:01.930 [2024-07-10 13:49:41.188180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.930 [2024-07-10 13:49:41.188205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:01.930 [2024-07-10 13:49:41.188243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.930 [2024-07-10 13:49:41.190292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.930 [2024-07-10 13:49:41.190339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:01.930 BaseBdev1 00:26:01.930 13:49:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:01.930 13:49:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:01.930 13:49:41 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:02.188 BaseBdev2_malloc 00:26:02.188 13:49:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:02.447 [2024-07-10 13:49:41.652522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:02.447 [2024-07-10 13:49:41.652606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.447 [2024-07-10 13:49:41.652640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:02.447 [2024-07-10 13:49:41.652678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.447 [2024-07-10 13:49:41.654734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.447 [2024-07-10 13:49:41.654781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:02.447 BaseBdev2 00:26:02.447 13:49:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:02.447 13:49:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:02.447 13:49:41 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:02.707 BaseBdev3_malloc 00:26:02.707 13:49:41 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:02.707 [2024-07-10 13:49:42.053250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:02.707 [2024-07-10 13:49:42.053327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.707 [2024-07-10 13:49:42.053375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:02.707 [2024-07-10 13:49:42.053410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.707 [2024-07-10 13:49:42.055568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.707 [2024-07-10 13:49:42.055619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:02.707 BaseBdev3 00:26:02.965 13:49:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:02.965 13:49:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:26:02.965 13:49:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:02.965 BaseBdev4_malloc 00:26:02.965 13:49:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:03.223 [2024-07-10 13:49:42.455562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:03.223 [2024-07-10 13:49:42.455643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.223 [2024-07-10 13:49:42.455670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:03.223 [2024-07-10 13:49:42.455700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.223 [2024-07-10 13:49:42.457697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.223 [2024-07-10 13:49:42.457747] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:03.223 BaseBdev4 00:26:03.223 13:49:42 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:03.481 spare_malloc 00:26:03.481 13:49:42 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:03.740 spare_delay 00:26:03.740 13:49:42 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:03.740 [2024-07-10 13:49:43.060950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:03.740 [2024-07-10 13:49:43.061033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.740 [2024-07-10 13:49:43.061075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:03.740 [2024-07-10 13:49:43.061106] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.740 [2024-07-10 13:49:43.063158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:03.740 [2024-07-10 13:49:43.063211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:03.740 spare 00:26:03.740 13:49:43 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:04.000 [2024-07-10 13:49:43.248710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:04.000 [2024-07-10 13:49:43.250283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.000 [2024-07-10 13:49:43.250355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:04.000 [2024-07-10 13:49:43.250395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:04.000 [2024-07-10 13:49:43.250570] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:26:04.000 [2024-07-10 13:49:43.250585] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:04.000 [2024-07-10 13:49:43.250710] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:04.000 [2024-07-10 13:49:43.257037] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:26:04.000 [2024-07-10 13:49:43.257060] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:26:04.000 [2024-07-10 13:49:43.257222] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.000 13:49:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.259 13:49:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.259 "name": "raid_bdev1", 00:26:04.259 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:04.259 "strip_size_kb": 64, 00:26:04.259 "state": "online", 00:26:04.259 "raid_level": "raid5f", 00:26:04.259 "superblock": true, 00:26:04.259 "num_base_bdevs": 4, 00:26:04.259 "num_base_bdevs_discovered": 4, 00:26:04.259 "num_base_bdevs_operational": 4, 00:26:04.259 "base_bdevs_list": [ 00:26:04.259 { 00:26:04.259 "name": "BaseBdev1", 00:26:04.259 "uuid": "5c85e389-0189-5fe3-bf95-7fad2b2a3f50", 00:26:04.259 "is_configured": true, 00:26:04.259 "data_offset": 2048, 00:26:04.259 "data_size": 63488 00:26:04.259 }, 00:26:04.259 { 00:26:04.259 "name": "BaseBdev2", 00:26:04.259 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:04.259 "is_configured": true, 00:26:04.259 
"data_offset": 2048, 00:26:04.259 "data_size": 63488 00:26:04.259 }, 00:26:04.259 { 00:26:04.259 "name": "BaseBdev3", 00:26:04.259 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:04.259 "is_configured": true, 00:26:04.259 "data_offset": 2048, 00:26:04.259 "data_size": 63488 00:26:04.259 }, 00:26:04.259 { 00:26:04.259 "name": "BaseBdev4", 00:26:04.259 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:04.259 "is_configured": true, 00:26:04.259 "data_offset": 2048, 00:26:04.259 "data_size": 63488 00:26:04.259 } 00:26:04.259 ] 00:26:04.259 }' 00:26:04.259 13:49:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.259 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.826 13:49:44 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:04.826 13:49:44 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:05.085 [2024-07-10 13:49:44.235081] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:05.085 13:49:44 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@12 -- # local i 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:05.085 13:49:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:05.344 [2024-07-10 13:49:44.558427] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:05.344 /dev/nbd0 00:26:05.344 13:49:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:05.344 13:49:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:05.344 13:49:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:05.344 13:49:44 -- common/autotest_common.sh@857 -- # local i 00:26:05.344 13:49:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:05.344 13:49:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:05.344 13:49:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:05.344 13:49:44 -- common/autotest_common.sh@861 -- # break 00:26:05.344 13:49:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:05.344 13:49:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:05.344 13:49:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.344 1+0 records in 00:26:05.344 1+0 records out 00:26:05.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390224 s, 
10.5 MB/s 00:26:05.344 13:49:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.344 13:49:44 -- common/autotest_common.sh@874 -- # size=4096 00:26:05.344 13:49:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.344 13:49:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:05.344 13:49:44 -- common/autotest_common.sh@877 -- # return 0 00:26:05.344 13:49:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.344 13:49:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:05.344 13:49:44 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:05.344 13:49:44 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:05.344 13:49:44 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:05.344 13:49:44 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:05.912 496+0 records in 00:26:05.912 496+0 records out 00:26:05.912 97517568 bytes (98 MB, 93 MiB) copied, 0.481484 s, 203 MB/s 00:26:05.912 13:49:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@51 -- # local i 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:05.912 13:49:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.171 [2024-07-10 13:49:45.312365] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@41 -- # break 00:26:06.171 13:49:45 -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.171 13:49:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:06.430 [2024-07-10 13:49:45.592675] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:06.430 13:49:45 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.430 13:49:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.689 13:49:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:06.689 "name": "raid_bdev1", 00:26:06.689 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:06.689 "strip_size_kb": 64, 00:26:06.689 "state": "online", 00:26:06.689 "raid_level": "raid5f", 00:26:06.689 "superblock": true, 00:26:06.689 "num_base_bdevs": 4, 00:26:06.689 "num_base_bdevs_discovered": 3, 00:26:06.689 "num_base_bdevs_operational": 3, 00:26:06.689 "base_bdevs_list": [ 00:26:06.689 { 00:26:06.689 "name": null, 00:26:06.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.689 "is_configured": false, 00:26:06.689 "data_offset": 2048, 00:26:06.689 "data_size": 63488 00:26:06.689 }, 00:26:06.689 { 00:26:06.689 "name": "BaseBdev2", 00:26:06.689 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:06.689 "is_configured": true, 00:26:06.689 "data_offset": 2048, 00:26:06.689 "data_size": 63488 00:26:06.689 }, 00:26:06.689 { 00:26:06.689 "name": "BaseBdev3", 00:26:06.689 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:06.689 "is_configured": true, 00:26:06.689 "data_offset": 2048, 00:26:06.689 "data_size": 63488 00:26:06.689 }, 00:26:06.689 { 00:26:06.689 "name": "BaseBdev4", 00:26:06.689 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:06.689 "is_configured": true, 00:26:06.689 "data_offset": 2048, 00:26:06.689 "data_size": 63488 00:26:06.689 } 00:26:06.689 ] 00:26:06.689 }' 00:26:06.689 13:49:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:06.689 13:49:45 -- common/autotest_common.sh@10 -- # set +x 00:26:07.255 13:49:46 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:07.255 [2024-07-10 13:49:46.574967] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:07.255 [2024-07-10 13:49:46.575026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:07.255 [2024-07-10 13:49:46.590224] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c860 00:26:07.255 [2024-07-10 13:49:46.598762] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:07.255 13:49:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:08.631 "name": "raid_bdev1", 00:26:08.631 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:08.631 "strip_size_kb": 64, 00:26:08.631 "state": "online", 00:26:08.631 "raid_level": "raid5f", 
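The constants traced a little earlier all follow from the geometry: with raid5f, one of the four 64 KiB strips per stripe holds parity, so a full-stripe write covers 3 x 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 B, which is the write_unit_size=384, the `echo 192` at @582, and the `dd ... bs=196608` above. Each 65536-block malloc bdev loses 2048 blocks to the superblock, leaving 63488 data blocks, so the array holds 3 x 63488 = 190464 blocks, i.e. exactly 496 full stripes, matching `dd count=496` and the 97517568 bytes copied. As a quick check:

    # deriving the traced constants (strip 64 KiB, 4 base bdevs, raid5f)
    strip_kb=64; n=4; blocklen=512
    data_strips=$((n - 1))                             # 3 (one strip per stripe is parity)
    full_stripe=$((data_strips * strip_kb * 1024))     # 196608 -> dd bs
    write_unit=$((full_stripe / blocklen))             # 384   -> write_unit_size
    data_size=$((65536 - 2048))                        # 63488 blocks per bdev after the superblock
    raid_blocks=$((data_strips * data_size))           # 190464 -> raid_bdev_size
    echo $((raid_blocks / write_unit)) $((496 * full_stripe))   # 496 stripes, 97517568 bytes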
00:26:08.631 "superblock": true, 00:26:08.631 "num_base_bdevs": 4, 00:26:08.631 "num_base_bdevs_discovered": 4, 00:26:08.631 "num_base_bdevs_operational": 4, 00:26:08.631 "process": { 00:26:08.631 "type": "rebuild", 00:26:08.631 "target": "spare", 00:26:08.631 "progress": { 00:26:08.631 "blocks": 21120, 00:26:08.631 "percent": 11 00:26:08.631 } 00:26:08.631 }, 00:26:08.631 "base_bdevs_list": [ 00:26:08.631 { 00:26:08.631 "name": "spare", 00:26:08.631 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:08.631 "is_configured": true, 00:26:08.631 "data_offset": 2048, 00:26:08.631 "data_size": 63488 00:26:08.631 }, 00:26:08.631 { 00:26:08.631 "name": "BaseBdev2", 00:26:08.631 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:08.631 "is_configured": true, 00:26:08.631 "data_offset": 2048, 00:26:08.631 "data_size": 63488 00:26:08.631 }, 00:26:08.631 { 00:26:08.631 "name": "BaseBdev3", 00:26:08.631 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:08.631 "is_configured": true, 00:26:08.631 "data_offset": 2048, 00:26:08.631 "data_size": 63488 00:26:08.631 }, 00:26:08.631 { 00:26:08.631 "name": "BaseBdev4", 00:26:08.631 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:08.631 "is_configured": true, 00:26:08.631 "data_offset": 2048, 00:26:08.631 "data_size": 63488 00:26:08.631 } 00:26:08.631 ] 00:26:08.631 }' 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.631 13:49:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:08.890 [2024-07-10 13:49:48.049397] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:08.890 [2024-07-10 13:49:48.107759] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:08.890 [2024-07-10 13:49:48.107821] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.890 13:49:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.148 13:49:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:09.148 "name": "raid_bdev1", 00:26:09.148 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:09.148 "strip_size_kb": 64, 00:26:09.148 "state": "online", 00:26:09.148 "raid_level": "raid5f", 00:26:09.148 "superblock": true, 00:26:09.148 
"num_base_bdevs": 4, 00:26:09.148 "num_base_bdevs_discovered": 3, 00:26:09.148 "num_base_bdevs_operational": 3, 00:26:09.148 "base_bdevs_list": [ 00:26:09.148 { 00:26:09.148 "name": null, 00:26:09.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.148 "is_configured": false, 00:26:09.148 "data_offset": 2048, 00:26:09.148 "data_size": 63488 00:26:09.148 }, 00:26:09.148 { 00:26:09.148 "name": "BaseBdev2", 00:26:09.148 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:09.148 "is_configured": true, 00:26:09.148 "data_offset": 2048, 00:26:09.148 "data_size": 63488 00:26:09.148 }, 00:26:09.148 { 00:26:09.148 "name": "BaseBdev3", 00:26:09.148 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:09.148 "is_configured": true, 00:26:09.148 "data_offset": 2048, 00:26:09.148 "data_size": 63488 00:26:09.148 }, 00:26:09.148 { 00:26:09.148 "name": "BaseBdev4", 00:26:09.148 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:09.148 "is_configured": true, 00:26:09.148 "data_offset": 2048, 00:26:09.148 "data_size": 63488 00:26:09.148 } 00:26:09.148 ] 00:26:09.148 }' 00:26:09.148 13:49:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:09.148 13:49:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.713 13:49:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:09.971 "name": "raid_bdev1", 00:26:09.971 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:09.971 "strip_size_kb": 64, 00:26:09.971 "state": "online", 00:26:09.971 "raid_level": "raid5f", 00:26:09.971 "superblock": true, 00:26:09.971 "num_base_bdevs": 4, 00:26:09.971 "num_base_bdevs_discovered": 3, 00:26:09.971 "num_base_bdevs_operational": 3, 00:26:09.971 "base_bdevs_list": [ 00:26:09.971 { 00:26:09.971 "name": null, 00:26:09.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.971 "is_configured": false, 00:26:09.971 "data_offset": 2048, 00:26:09.971 "data_size": 63488 00:26:09.971 }, 00:26:09.971 { 00:26:09.971 "name": "BaseBdev2", 00:26:09.971 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:09.971 "is_configured": true, 00:26:09.971 "data_offset": 2048, 00:26:09.971 "data_size": 63488 00:26:09.971 }, 00:26:09.971 { 00:26:09.971 "name": "BaseBdev3", 00:26:09.971 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:09.971 "is_configured": true, 00:26:09.971 "data_offset": 2048, 00:26:09.971 "data_size": 63488 00:26:09.971 }, 00:26:09.971 { 00:26:09.971 "name": "BaseBdev4", 00:26:09.971 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:09.971 "is_configured": true, 00:26:09.971 "data_offset": 2048, 00:26:09.971 "data_size": 63488 00:26:09.971 } 00:26:09.971 ] 00:26:09.971 }' 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@191 -- # 
[[ none == \n\o\n\e ]] 00:26:09.971 13:49:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:10.229 [2024-07-10 13:49:49.436436] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:10.229 [2024-07-10 13:49:49.436501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:10.229 [2024-07-10 13:49:49.449654] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ca00 00:26:10.229 [2024-07-10 13:49:49.458093] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:10.229 13:49:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.163 13:49:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.422 13:49:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.422 "name": "raid_bdev1", 00:26:11.422 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:11.422 "strip_size_kb": 64, 00:26:11.422 "state": "online", 00:26:11.422 "raid_level": "raid5f", 00:26:11.422 "superblock": true, 00:26:11.422 "num_base_bdevs": 4, 00:26:11.422 "num_base_bdevs_discovered": 4, 00:26:11.422 "num_base_bdevs_operational": 4, 00:26:11.422 "process": { 00:26:11.422 "type": "rebuild", 00:26:11.422 "target": "spare", 00:26:11.422 "progress": { 00:26:11.422 "blocks": 21120, 00:26:11.422 "percent": 11 00:26:11.422 } 00:26:11.422 }, 00:26:11.422 "base_bdevs_list": [ 00:26:11.422 { 00:26:11.422 "name": "spare", 00:26:11.422 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:11.422 "is_configured": true, 00:26:11.422 "data_offset": 2048, 00:26:11.422 "data_size": 63488 00:26:11.422 }, 00:26:11.422 { 00:26:11.422 "name": "BaseBdev2", 00:26:11.422 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:11.422 "is_configured": true, 00:26:11.422 "data_offset": 2048, 00:26:11.422 "data_size": 63488 00:26:11.422 }, 00:26:11.422 { 00:26:11.422 "name": "BaseBdev3", 00:26:11.423 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:11.423 "is_configured": true, 00:26:11.423 "data_offset": 2048, 00:26:11.423 "data_size": 63488 00:26:11.423 }, 00:26:11.423 { 00:26:11.423 "name": "BaseBdev4", 00:26:11.423 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:11.423 "is_configured": true, 00:26:11.423 "data_offset": 2048, 00:26:11.423 "data_size": 63488 00:26:11.423 } 00:26:11.423 ] 00:26:11.423 }' 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:11.423 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 
617: [: =: unary operator expected 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@657 -- # local timeout=705 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.423 13:49:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.681 13:49:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.681 "name": "raid_bdev1", 00:26:11.681 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:11.681 "strip_size_kb": 64, 00:26:11.681 "state": "online", 00:26:11.681 "raid_level": "raid5f", 00:26:11.681 "superblock": true, 00:26:11.681 "num_base_bdevs": 4, 00:26:11.681 "num_base_bdevs_discovered": 4, 00:26:11.681 "num_base_bdevs_operational": 4, 00:26:11.681 "process": { 00:26:11.681 "type": "rebuild", 00:26:11.681 "target": "spare", 00:26:11.681 "progress": { 00:26:11.681 "blocks": 26880, 00:26:11.681 "percent": 14 00:26:11.681 } 00:26:11.681 }, 00:26:11.681 "base_bdevs_list": [ 00:26:11.681 { 00:26:11.681 "name": "spare", 00:26:11.681 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:11.681 "is_configured": true, 00:26:11.681 "data_offset": 2048, 00:26:11.681 "data_size": 63488 00:26:11.681 }, 00:26:11.681 { 00:26:11.681 "name": "BaseBdev2", 00:26:11.681 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:11.681 "is_configured": true, 00:26:11.681 "data_offset": 2048, 00:26:11.681 "data_size": 63488 00:26:11.681 }, 00:26:11.681 { 00:26:11.681 "name": "BaseBdev3", 00:26:11.681 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:11.681 "is_configured": true, 00:26:11.681 "data_offset": 2048, 00:26:11.681 "data_size": 63488 00:26:11.681 }, 00:26:11.681 { 00:26:11.681 "name": "BaseBdev4", 00:26:11.681 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:11.681 "is_configured": true, 00:26:11.681 "data_offset": 2048, 00:26:11.681 "data_size": 63488 00:26:11.681 } 00:26:11.681 ] 00:26:11.681 }' 00:26:11.681 13:49:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.682 13:49:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:11.682 13:49:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.940 13:49:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.940 13:49:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@188 -- # 
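The one real diagnostic in this run is the `bdev_raid.sh: line 617: [: =: unary operator expected` just above: the trace at @617 shows `'[' = false ']'`, meaning the left-hand variable expanded to an empty string, so `[` saw only `= false` and failed with status 2 (the run continues because the surrounding condition simply falls through to the other branch, as the following `local num_base_bdevs_operational=4` shows). Which variable was empty is not visible in the trace; the generic hardening is to quote the expansion with a default, or to use `[[ ]]`, which does not word-split:

    # minimal reproduction of the line-617 failure and two standard fixes
    unset flag
    [ $flag = false ] && echo unreachable        # -> "[: =: unary operator expected", status 2
    [ "${flag:-false}" = false ] && echo "quoted with a default: compares as intended"
    [[ $flag = false ]] || echo "[[ ]] never word-splits; this is just a false comparison"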
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.879 13:49:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:13.141 "name": "raid_bdev1", 00:26:13.141 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:13.141 "strip_size_kb": 64, 00:26:13.141 "state": "online", 00:26:13.141 "raid_level": "raid5f", 00:26:13.141 "superblock": true, 00:26:13.141 "num_base_bdevs": 4, 00:26:13.141 "num_base_bdevs_discovered": 4, 00:26:13.141 "num_base_bdevs_operational": 4, 00:26:13.141 "process": { 00:26:13.141 "type": "rebuild", 00:26:13.141 "target": "spare", 00:26:13.141 "progress": { 00:26:13.141 "blocks": 51840, 00:26:13.141 "percent": 27 00:26:13.141 } 00:26:13.141 }, 00:26:13.141 "base_bdevs_list": [ 00:26:13.141 { 00:26:13.141 "name": "spare", 00:26:13.141 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:13.141 "is_configured": true, 00:26:13.141 "data_offset": 2048, 00:26:13.141 "data_size": 63488 00:26:13.141 }, 00:26:13.141 { 00:26:13.141 "name": "BaseBdev2", 00:26:13.141 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:13.141 "is_configured": true, 00:26:13.141 "data_offset": 2048, 00:26:13.141 "data_size": 63488 00:26:13.141 }, 00:26:13.141 { 00:26:13.141 "name": "BaseBdev3", 00:26:13.141 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:13.141 "is_configured": true, 00:26:13.141 "data_offset": 2048, 00:26:13.141 "data_size": 63488 00:26:13.141 }, 00:26:13.141 { 00:26:13.141 "name": "BaseBdev4", 00:26:13.141 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:13.141 "is_configured": true, 00:26:13.141 "data_offset": 2048, 00:26:13.141 "data_size": 63488 00:26:13.141 } 00:26:13.141 ] 00:26:13.141 }' 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:13.141 13:49:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.082 13:49:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:14.341 "name": "raid_bdev1", 00:26:14.341 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:14.341 "strip_size_kb": 64, 00:26:14.341 "state": "online", 00:26:14.341 "raid_level": "raid5f", 00:26:14.341 "superblock": true, 00:26:14.341 "num_base_bdevs": 4, 00:26:14.341 "num_base_bdevs_discovered": 4, 00:26:14.341 "num_base_bdevs_operational": 4, 00:26:14.341 "process": { 00:26:14.341 "type": "rebuild", 00:26:14.341 "target": "spare", 00:26:14.341 "progress": { 00:26:14.341 "blocks": 76800, 00:26:14.341 "percent": 40 00:26:14.341 } 00:26:14.341 }, 
00:26:14.341 "base_bdevs_list": [ 00:26:14.341 { 00:26:14.341 "name": "spare", 00:26:14.341 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:14.341 "is_configured": true, 00:26:14.341 "data_offset": 2048, 00:26:14.341 "data_size": 63488 00:26:14.341 }, 00:26:14.341 { 00:26:14.341 "name": "BaseBdev2", 00:26:14.341 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:14.341 "is_configured": true, 00:26:14.341 "data_offset": 2048, 00:26:14.341 "data_size": 63488 00:26:14.341 }, 00:26:14.341 { 00:26:14.341 "name": "BaseBdev3", 00:26:14.341 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:14.341 "is_configured": true, 00:26:14.341 "data_offset": 2048, 00:26:14.341 "data_size": 63488 00:26:14.341 }, 00:26:14.341 { 00:26:14.341 "name": "BaseBdev4", 00:26:14.341 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:14.341 "is_configured": true, 00:26:14.341 "data_offset": 2048, 00:26:14.341 "data_size": 63488 00:26:14.341 } 00:26:14.341 ] 00:26:14.341 }' 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:14.341 13:49:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:15.719 "name": "raid_bdev1", 00:26:15.719 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:15.719 "strip_size_kb": 64, 00:26:15.719 "state": "online", 00:26:15.719 "raid_level": "raid5f", 00:26:15.719 "superblock": true, 00:26:15.719 "num_base_bdevs": 4, 00:26:15.719 "num_base_bdevs_discovered": 4, 00:26:15.719 "num_base_bdevs_operational": 4, 00:26:15.719 "process": { 00:26:15.719 "type": "rebuild", 00:26:15.719 "target": "spare", 00:26:15.719 "progress": { 00:26:15.719 "blocks": 101760, 00:26:15.719 "percent": 53 00:26:15.719 } 00:26:15.719 }, 00:26:15.719 "base_bdevs_list": [ 00:26:15.719 { 00:26:15.719 "name": "spare", 00:26:15.719 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:15.719 "is_configured": true, 00:26:15.719 "data_offset": 2048, 00:26:15.719 "data_size": 63488 00:26:15.719 }, 00:26:15.719 { 00:26:15.719 "name": "BaseBdev2", 00:26:15.719 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:15.719 "is_configured": true, 00:26:15.719 "data_offset": 2048, 00:26:15.719 "data_size": 63488 00:26:15.719 }, 00:26:15.719 { 00:26:15.719 "name": "BaseBdev3", 00:26:15.719 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:15.719 "is_configured": true, 00:26:15.719 "data_offset": 2048, 00:26:15.719 "data_size": 63488 00:26:15.719 }, 00:26:15.719 { 00:26:15.719 "name": "BaseBdev4", 00:26:15.719 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 
00:26:15.719 "is_configured": true, 00:26:15.719 "data_offset": 2048, 00:26:15.719 "data_size": 63488 00:26:15.719 } 00:26:15.719 ] 00:26:15.719 }' 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:15.719 13:49:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.657 13:49:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:16.917 "name": "raid_bdev1", 00:26:16.917 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:16.917 "strip_size_kb": 64, 00:26:16.917 "state": "online", 00:26:16.917 "raid_level": "raid5f", 00:26:16.917 "superblock": true, 00:26:16.917 "num_base_bdevs": 4, 00:26:16.917 "num_base_bdevs_discovered": 4, 00:26:16.917 "num_base_bdevs_operational": 4, 00:26:16.917 "process": { 00:26:16.917 "type": "rebuild", 00:26:16.917 "target": "spare", 00:26:16.917 "progress": { 00:26:16.917 "blocks": 126720, 00:26:16.917 "percent": 66 00:26:16.917 } 00:26:16.917 }, 00:26:16.917 "base_bdevs_list": [ 00:26:16.917 { 00:26:16.917 "name": "spare", 00:26:16.917 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:16.917 "is_configured": true, 00:26:16.917 "data_offset": 2048, 00:26:16.917 "data_size": 63488 00:26:16.917 }, 00:26:16.917 { 00:26:16.917 "name": "BaseBdev2", 00:26:16.917 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:16.917 "is_configured": true, 00:26:16.917 "data_offset": 2048, 00:26:16.917 "data_size": 63488 00:26:16.917 }, 00:26:16.917 { 00:26:16.917 "name": "BaseBdev3", 00:26:16.917 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:16.917 "is_configured": true, 00:26:16.917 "data_offset": 2048, 00:26:16.917 "data_size": 63488 00:26:16.917 }, 00:26:16.917 { 00:26:16.917 "name": "BaseBdev4", 00:26:16.917 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:16.917 "is_configured": true, 00:26:16.917 "data_offset": 2048, 00:26:16.917 "data_size": 63488 00:26:16.917 } 00:26:16.917 ] 00:26:16.917 }' 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:16.917 13:49:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:18.354 13:49:57 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:18.354 "name": "raid_bdev1", 00:26:18.354 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:18.354 "strip_size_kb": 64, 00:26:18.354 "state": "online", 00:26:18.354 "raid_level": "raid5f", 00:26:18.354 "superblock": true, 00:26:18.354 "num_base_bdevs": 4, 00:26:18.354 "num_base_bdevs_discovered": 4, 00:26:18.354 "num_base_bdevs_operational": 4, 00:26:18.354 "process": { 00:26:18.354 "type": "rebuild", 00:26:18.354 "target": "spare", 00:26:18.354 "progress": { 00:26:18.354 "blocks": 151680, 00:26:18.354 "percent": 79 00:26:18.354 } 00:26:18.354 }, 00:26:18.354 "base_bdevs_list": [ 00:26:18.354 { 00:26:18.354 "name": "spare", 00:26:18.354 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:18.354 "is_configured": true, 00:26:18.354 "data_offset": 2048, 00:26:18.354 "data_size": 63488 00:26:18.354 }, 00:26:18.354 { 00:26:18.354 "name": "BaseBdev2", 00:26:18.354 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:18.354 "is_configured": true, 00:26:18.354 "data_offset": 2048, 00:26:18.354 "data_size": 63488 00:26:18.354 }, 00:26:18.354 { 00:26:18.354 "name": "BaseBdev3", 00:26:18.354 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:18.354 "is_configured": true, 00:26:18.354 "data_offset": 2048, 00:26:18.354 "data_size": 63488 00:26:18.354 }, 00:26:18.354 { 00:26:18.354 "name": "BaseBdev4", 00:26:18.354 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:18.354 "is_configured": true, 00:26:18.354 "data_offset": 2048, 00:26:18.354 "data_size": 63488 00:26:18.354 } 00:26:18.354 ] 00:26:18.354 }' 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:18.354 13:49:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.293 13:49:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:19.553 "name": "raid_bdev1", 00:26:19.553 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:19.553 "strip_size_kb": 64, 00:26:19.553 "state": "online", 00:26:19.553 "raid_level": "raid5f", 00:26:19.553 "superblock": true, 00:26:19.553 "num_base_bdevs": 4, 00:26:19.553 "num_base_bdevs_discovered": 4, 
00:26:19.553 "num_base_bdevs_operational": 4, 00:26:19.553 "process": { 00:26:19.553 "type": "rebuild", 00:26:19.553 "target": "spare", 00:26:19.553 "progress": { 00:26:19.553 "blocks": 174720, 00:26:19.553 "percent": 91 00:26:19.553 } 00:26:19.553 }, 00:26:19.553 "base_bdevs_list": [ 00:26:19.553 { 00:26:19.553 "name": "spare", 00:26:19.553 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:19.553 "is_configured": true, 00:26:19.553 "data_offset": 2048, 00:26:19.553 "data_size": 63488 00:26:19.553 }, 00:26:19.553 { 00:26:19.553 "name": "BaseBdev2", 00:26:19.553 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:19.553 "is_configured": true, 00:26:19.553 "data_offset": 2048, 00:26:19.553 "data_size": 63488 00:26:19.553 }, 00:26:19.553 { 00:26:19.553 "name": "BaseBdev3", 00:26:19.553 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:19.553 "is_configured": true, 00:26:19.553 "data_offset": 2048, 00:26:19.553 "data_size": 63488 00:26:19.553 }, 00:26:19.553 { 00:26:19.553 "name": "BaseBdev4", 00:26:19.553 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:19.553 "is_configured": true, 00:26:19.553 "data_offset": 2048, 00:26:19.553 "data_size": 63488 00:26:19.553 } 00:26:19.553 ] 00:26:19.553 }' 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:19.553 13:49:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:20.490 [2024-07-10 13:49:59.521499] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:20.490 [2024-07-10 13:49:59.521609] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:20.490 [2024-07-10 13:49:59.521815] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.750 13:49:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.750 13:50:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:20.750 "name": "raid_bdev1", 00:26:20.750 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:20.750 "strip_size_kb": 64, 00:26:20.750 "state": "online", 00:26:20.750 "raid_level": "raid5f", 00:26:20.750 "superblock": true, 00:26:20.750 "num_base_bdevs": 4, 00:26:20.750 "num_base_bdevs_discovered": 4, 00:26:20.750 "num_base_bdevs_operational": 4, 00:26:20.750 "base_bdevs_list": [ 00:26:20.750 { 00:26:20.750 "name": "spare", 00:26:20.750 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:20.750 "is_configured": true, 00:26:20.750 "data_offset": 2048, 00:26:20.750 "data_size": 63488 00:26:20.750 }, 00:26:20.750 { 00:26:20.750 "name": "BaseBdev2", 00:26:20.750 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:20.750 "is_configured": 
true, 00:26:20.750 "data_offset": 2048, 00:26:20.750 "data_size": 63488 00:26:20.750 }, 00:26:20.750 { 00:26:20.750 "name": "BaseBdev3", 00:26:20.750 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:20.750 "is_configured": true, 00:26:20.750 "data_offset": 2048, 00:26:20.750 "data_size": 63488 00:26:20.750 }, 00:26:20.750 { 00:26:20.750 "name": "BaseBdev4", 00:26:20.750 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:20.750 "is_configured": true, 00:26:20.750 "data_offset": 2048, 00:26:20.750 "data_size": 63488 00:26:20.750 } 00:26:20.750 ] 00:26:20.750 }' 00:26:20.750 13:50:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:20.750 13:50:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:20.750 13:50:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@660 -- # break 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:21.009 "name": "raid_bdev1", 00:26:21.009 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:21.009 "strip_size_kb": 64, 00:26:21.009 "state": "online", 00:26:21.009 "raid_level": "raid5f", 00:26:21.009 "superblock": true, 00:26:21.009 "num_base_bdevs": 4, 00:26:21.009 "num_base_bdevs_discovered": 4, 00:26:21.009 "num_base_bdevs_operational": 4, 00:26:21.009 "base_bdevs_list": [ 00:26:21.009 { 00:26:21.009 "name": "spare", 00:26:21.009 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:21.009 "is_configured": true, 00:26:21.009 "data_offset": 2048, 00:26:21.009 "data_size": 63488 00:26:21.009 }, 00:26:21.009 { 00:26:21.009 "name": "BaseBdev2", 00:26:21.009 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:21.009 "is_configured": true, 00:26:21.009 "data_offset": 2048, 00:26:21.009 "data_size": 63488 00:26:21.009 }, 00:26:21.009 { 00:26:21.009 "name": "BaseBdev3", 00:26:21.009 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:21.009 "is_configured": true, 00:26:21.009 "data_offset": 2048, 00:26:21.009 "data_size": 63488 00:26:21.009 }, 00:26:21.009 { 00:26:21.009 "name": "BaseBdev4", 00:26:21.009 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:21.009 "is_configured": true, 00:26:21.009 "data_offset": 2048, 00:26:21.009 "data_size": 63488 00:26:21.009 } 00:26:21.009 ] 00:26:21.009 }' 00:26:21.009 13:50:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:21.268 13:50:00 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:21.268 13:50:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.269 13:50:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.269 13:50:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:21.269 "name": "raid_bdev1", 00:26:21.269 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:21.269 "strip_size_kb": 64, 00:26:21.269 "state": "online", 00:26:21.269 "raid_level": "raid5f", 00:26:21.269 "superblock": true, 00:26:21.269 "num_base_bdevs": 4, 00:26:21.269 "num_base_bdevs_discovered": 4, 00:26:21.269 "num_base_bdevs_operational": 4, 00:26:21.269 "base_bdevs_list": [ 00:26:21.269 { 00:26:21.269 "name": "spare", 00:26:21.269 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:21.269 "is_configured": true, 00:26:21.269 "data_offset": 2048, 00:26:21.269 "data_size": 63488 00:26:21.269 }, 00:26:21.269 { 00:26:21.269 "name": "BaseBdev2", 00:26:21.269 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:21.269 "is_configured": true, 00:26:21.269 "data_offset": 2048, 00:26:21.269 "data_size": 63488 00:26:21.269 }, 00:26:21.269 { 00:26:21.269 "name": "BaseBdev3", 00:26:21.269 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:21.269 "is_configured": true, 00:26:21.269 "data_offset": 2048, 00:26:21.269 "data_size": 63488 00:26:21.269 }, 00:26:21.269 { 00:26:21.269 "name": "BaseBdev4", 00:26:21.269 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:21.269 "is_configured": true, 00:26:21.269 "data_offset": 2048, 00:26:21.269 "data_size": 63488 00:26:21.269 } 00:26:21.269 ] 00:26:21.269 }' 00:26:21.269 13:50:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:21.269 13:50:00 -- common/autotest_common.sh@10 -- # set +x 00:26:22.207 13:50:01 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:22.207 [2024-07-10 13:50:01.372709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.207 [2024-07-10 13:50:01.372766] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.207 [2024-07-10 13:50:01.372849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.207 [2024-07-10 13:50:01.372946] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.207 [2024-07-10 13:50:01.372955] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:26:22.207 13:50:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.207 13:50:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:22.467 13:50:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:22.467 13:50:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:22.467 13:50:01 -- 
bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@12 -- # local i 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:22.467 /dev/nbd0 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:22.467 13:50:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:22.467 13:50:01 -- common/autotest_common.sh@857 -- # local i 00:26:22.467 13:50:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:22.467 13:50:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:22.467 13:50:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:22.467 13:50:01 -- common/autotest_common.sh@861 -- # break 00:26:22.467 13:50:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:22.467 13:50:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:22.467 13:50:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.467 1+0 records in 00:26:22.467 1+0 records out 00:26:22.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373572 s, 11.0 MB/s 00:26:22.467 13:50:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.467 13:50:01 -- common/autotest_common.sh@874 -- # size=4096 00:26:22.467 13:50:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.467 13:50:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:22.467 13:50:01 -- common/autotest_common.sh@877 -- # return 0 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:22.467 13:50:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:22.727 /dev/nbd1 00:26:22.727 13:50:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:22.727 13:50:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:22.727 13:50:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:22.727 13:50:02 -- common/autotest_common.sh@857 -- # local i 00:26:22.727 13:50:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:22.727 13:50:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:22.727 13:50:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:22.727 13:50:02 -- common/autotest_common.sh@861 -- # break 00:26:22.727 13:50:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:22.727 13:50:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:22.727 13:50:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.727 1+0 records in 00:26:22.727 1+0 
records out 00:26:22.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477241 s, 8.6 MB/s 00:26:22.727 13:50:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.727 13:50:02 -- common/autotest_common.sh@874 -- # size=4096 00:26:22.727 13:50:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.727 13:50:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:22.727 13:50:02 -- common/autotest_common.sh@877 -- # return 0 00:26:22.727 13:50:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.727 13:50:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:22.727 13:50:02 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:22.987 13:50:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@51 -- # local i 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:22.987 13:50:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@41 -- # break 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.247 13:50:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@41 -- # break 00:26:23.506 13:50:02 -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.506 13:50:02 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:23.506 13:50:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:23.506 13:50:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:23.506 13:50:02 -- bdev/bdev_raid.sh@698 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:23.782 13:50:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:24.047 [2024-07-10 13:50:03.199716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:24.047 [2024-07-10 13:50:03.199805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.047 [2024-07-10 13:50:03.199838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:24.047 [2024-07-10 13:50:03.199853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.047 [2024-07-10 13:50:03.201796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.047 [2024-07-10 13:50:03.201869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:24.047 [2024-07-10 13:50:03.201981] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:24.047 [2024-07-10 13:50:03.202039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.047 BaseBdev1 00:26:24.047 13:50:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:24.047 13:50:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:24.047 13:50:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:24.306 13:50:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:24.306 [2024-07-10 13:50:03.580227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:24.306 [2024-07-10 13:50:03.580338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.306 [2024-07-10 13:50:03.580376] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:24.306 [2024-07-10 13:50:03.580391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.306 [2024-07-10 13:50:03.580810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.306 [2024-07-10 13:50:03.580856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:24.306 [2024-07-10 13:50:03.580949] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:24.306 [2024-07-10 13:50:03.580965] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:24.306 [2024-07-10 13:50:03.580971] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.306 [2024-07-10 13:50:03.580990] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:26:24.306 [2024-07-10 13:50:03.581063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.306 BaseBdev2 00:26:24.306 13:50:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:24.306 13:50:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:24.306 13:50:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev3 00:26:24.580 13:50:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:24.856 [2024-07-10 13:50:03.967604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:24.856 [2024-07-10 13:50:03.967710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.856 [2024-07-10 13:50:03.967739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:24.856 [2024-07-10 13:50:03.967761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.856 [2024-07-10 13:50:03.968296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.856 [2024-07-10 13:50:03.968356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:24.856 [2024-07-10 13:50:03.968489] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:24.856 [2024-07-10 13:50:03.968523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.856 BaseBdev3 00:26:24.856 13:50:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:24.856 13:50:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:24.856 13:50:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:24.856 13:50:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:25.116 [2024-07-10 13:50:04.338951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:25.116 [2024-07-10 13:50:04.339058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.116 [2024-07-10 13:50:04.339089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:25.116 [2024-07-10 13:50:04.339109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:25.116 [2024-07-10 13:50:04.339570] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.116 [2024-07-10 13:50:04.339628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:25.116 [2024-07-10 13:50:04.339734] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:25.116 [2024-07-10 13:50:04.339767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:25.116 BaseBdev4 00:26:25.116 13:50:04 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:25.376 13:50:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:25.635 [2024-07-10 13:50:04.741787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:25.635 [2024-07-10 13:50:04.741876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:25.635 [2024-07-10 13:50:04.741906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:25.635 [2024-07-10 13:50:04.741926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:26:25.636 [2024-07-10 13:50:04.742399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:25.636 [2024-07-10 13:50:04.742453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:25.636 [2024-07-10 13:50:04.742574] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:25.636 [2024-07-10 13:50:04.742603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.636 spare 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.636 [2024-07-10 13:50:04.842539] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:26:25.636 [2024-07-10 13:50:04.842587] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:25.636 [2024-07-10 13:50:04.842797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d7b0 00:26:25.636 [2024-07-10 13:50:04.850454] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:26:25.636 [2024-07-10 13:50:04.850490] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:26:25.636 [2024-07-10 13:50:04.850696] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:25.636 "name": "raid_bdev1", 00:26:25.636 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:25.636 "strip_size_kb": 64, 00:26:25.636 "state": "online", 00:26:25.636 "raid_level": "raid5f", 00:26:25.636 "superblock": true, 00:26:25.636 "num_base_bdevs": 4, 00:26:25.636 "num_base_bdevs_discovered": 4, 00:26:25.636 "num_base_bdevs_operational": 4, 00:26:25.636 "base_bdevs_list": [ 00:26:25.636 { 00:26:25.636 "name": "spare", 00:26:25.636 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:25.636 "is_configured": true, 00:26:25.636 "data_offset": 2048, 00:26:25.636 "data_size": 63488 00:26:25.636 }, 00:26:25.636 { 00:26:25.636 "name": "BaseBdev2", 00:26:25.636 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:25.636 "is_configured": true, 00:26:25.636 "data_offset": 2048, 00:26:25.636 "data_size": 63488 00:26:25.636 }, 00:26:25.636 { 00:26:25.636 "name": "BaseBdev3", 00:26:25.636 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:25.636 "is_configured": true, 00:26:25.636 "data_offset": 2048, 00:26:25.636 "data_size": 63488 00:26:25.636 }, 00:26:25.636 { 00:26:25.636 "name": "BaseBdev4", 00:26:25.636 "uuid": 
"1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:25.636 "is_configured": true, 00:26:25.636 "data_offset": 2048, 00:26:25.636 "data_size": 63488 00:26:25.636 } 00:26:25.636 ] 00:26:25.636 }' 00:26:25.636 13:50:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:25.636 13:50:04 -- common/autotest_common.sh@10 -- # set +x 00:26:26.573 13:50:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:26.573 13:50:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:26.573 13:50:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:26.574 "name": "raid_bdev1", 00:26:26.574 "uuid": "187e8858-8d2b-4592-845f-604a78138de1", 00:26:26.574 "strip_size_kb": 64, 00:26:26.574 "state": "online", 00:26:26.574 "raid_level": "raid5f", 00:26:26.574 "superblock": true, 00:26:26.574 "num_base_bdevs": 4, 00:26:26.574 "num_base_bdevs_discovered": 4, 00:26:26.574 "num_base_bdevs_operational": 4, 00:26:26.574 "base_bdevs_list": [ 00:26:26.574 { 00:26:26.574 "name": "spare", 00:26:26.574 "uuid": "fefd9bd8-7870-5a2b-a1db-3d43539d23ec", 00:26:26.574 "is_configured": true, 00:26:26.574 "data_offset": 2048, 00:26:26.574 "data_size": 63488 00:26:26.574 }, 00:26:26.574 { 00:26:26.574 "name": "BaseBdev2", 00:26:26.574 "uuid": "6e0cac33-71a9-55a2-8921-0841daa575dd", 00:26:26.574 "is_configured": true, 00:26:26.574 "data_offset": 2048, 00:26:26.574 "data_size": 63488 00:26:26.574 }, 00:26:26.574 { 00:26:26.574 "name": "BaseBdev3", 00:26:26.574 "uuid": "e36f7dd8-341d-5b2a-abbd-94cf3a533fd0", 00:26:26.574 "is_configured": true, 00:26:26.574 "data_offset": 2048, 00:26:26.574 "data_size": 63488 00:26:26.574 }, 00:26:26.574 { 00:26:26.574 "name": "BaseBdev4", 00:26:26.574 "uuid": "1986cc4d-0b25-5542-95e7-4564f1ce35c9", 00:26:26.574 "is_configured": true, 00:26:26.574 "data_offset": 2048, 00:26:26.574 "data_size": 63488 00:26:26.574 } 00:26:26.574 ] 00:26:26.574 }' 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.574 13:50:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:26.833 13:50:06 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:26.833 13:50:06 -- bdev/bdev_raid.sh@709 -- # killprocess 135501 00:26:26.833 13:50:06 -- common/autotest_common.sh@926 -- # '[' -z 135501 ']' 00:26:26.833 13:50:06 -- common/autotest_common.sh@930 -- # kill -0 135501 00:26:26.833 13:50:06 -- common/autotest_common.sh@931 -- # uname 00:26:26.833 13:50:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:26.833 13:50:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135501 00:26:26.833 killing process with pid 135501 00:26:26.833 Received shutdown signal, test time 
was about 60.000000 seconds 00:26:26.833 00:26:26.833 Latency(us) 00:26:26.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.833 =================================================================================================================== 00:26:26.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:26.833 13:50:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:26.833 13:50:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:26.833 13:50:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135501' 00:26:26.833 13:50:06 -- common/autotest_common.sh@945 -- # kill 135501 00:26:26.833 13:50:06 -- common/autotest_common.sh@950 -- # wait 135501 00:26:26.833 [2024-07-10 13:50:06.067399] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:26.833 [2024-07-10 13:50:06.067477] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.833 [2024-07-10 13:50:06.067557] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.833 [2024-07-10 13:50:06.067570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:26:27.402 [2024-07-10 13:50:06.520822] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:28.782 13:50:07 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:28.782 00:26:28.782 real 0m27.852s 00:26:28.782 user 0m41.274s 00:26:28.782 sys 0m3.179s 00:26:28.782 13:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.782 ************************************ 00:26:28.782 END TEST raid5f_rebuild_test_sb 00:26:28.782 ************************************ 00:26:28.782 13:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.782 13:50:07 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:28.782 00:26:28.782 real 11m32.433s 00:26:28.782 user 18m46.474s 00:26:28.782 sys 1m25.925s 00:26:28.782 13:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.782 13:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.782 ************************************ 00:26:28.782 END TEST bdev_raid 00:26:28.782 ************************************ 00:26:28.782 13:50:07 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:28.782 13:50:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:28.782 13:50:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.782 13:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.782 ************************************ 00:26:28.782 START TEST bdevperf_config 00:26:28.782 ************************************ 00:26:28.782 13:50:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:28.782 * Looking for test storage... 
00:26:28.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:28.782 13:50:07 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:28.782 13:50:07 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:28.782 13:50:07 -- bdevperf/common.sh@9 -- # local rw=read 00:26:28.782 13:50:07 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:28.782 13:50:07 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:28.782 13:50:07 -- bdevperf/common.sh@13 -- # cat 00:26:28.782 13:50:07 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:28.782 13:50:07 -- bdevperf/common.sh@19 -- # echo 00:26:28.782 00:26:28.782 13:50:07 -- bdevperf/common.sh@20 -- # cat 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:28.782 13:50:07 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:28.782 13:50:07 -- bdevperf/common.sh@9 -- # local rw= 00:26:28.782 13:50:07 -- bdevperf/common.sh@10 -- # local filename= 00:26:28.782 13:50:07 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:28.782 13:50:07 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:28.782 13:50:07 -- bdevperf/common.sh@19 -- # echo 00:26:28.782 00:26:28.782 13:50:07 -- bdevperf/common.sh@20 -- # cat 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:28.782 13:50:07 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:28.782 13:50:07 -- bdevperf/common.sh@9 -- # local rw= 00:26:28.782 13:50:07 -- bdevperf/common.sh@10 -- # local filename= 00:26:28.782 13:50:07 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:28.782 13:50:07 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:28.782 13:50:07 -- bdevperf/common.sh@19 -- # echo 00:26:28.782 00:26:28.782 13:50:07 -- bdevperf/common.sh@20 -- # cat 00:26:28.782 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:28.782 13:50:07 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:28.782 13:50:07 -- bdevperf/common.sh@9 -- # local rw= 00:26:28.782 13:50:07 -- bdevperf/common.sh@10 -- # local filename= 00:26:28.782 13:50:07 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:28.782 13:50:07 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:28.782 13:50:07 -- bdevperf/common.sh@19 -- # echo 00:26:28.782 13:50:07 -- bdevperf/common.sh@20 -- # cat 00:26:28.782 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:28.782 13:50:07 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:28.782 13:50:07 -- bdevperf/common.sh@9 -- # local rw= 00:26:28.782 13:50:07 -- bdevperf/common.sh@10 -- # local filename= 00:26:28.782 13:50:07 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:28.782 13:50:07 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:28.782 13:50:07 -- bdevperf/common.sh@19 -- # echo 00:26:28.782 13:50:07 -- bdevperf/common.sh@20 -- # cat 00:26:28.782 13:50:07 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:34.057 13:50:12 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-10 13:50:08.048299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:34.057 [2024-07-10 13:50:08.048461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136318 ] 00:26:34.057 Using job config with 4 jobs 00:26:34.057 [2024-07-10 13:50:08.208193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.057 [2024-07-10 13:50:08.420624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.057 cpumask for '\''job0'\'' is too big 00:26:34.057 cpumask for '\''job1'\'' is too big 00:26:34.057 cpumask for '\''job2'\'' is too big 00:26:34.057 cpumask for '\''job3'\'' is too big 00:26:34.057 Running I/O for 2 seconds... 00:26:34.057 00:26:34.057 Latency(us) 00:26:34.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34674.80 33.86 0.00 0.00 7377.42 1259.21 11161.15 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34686.02 33.87 0.00 0.00 7364.17 1287.83 9901.95 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34661.87 33.85 0.00 0.00 7357.26 1294.98 8585.50 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34639.36 33.83 0.00 0.00 7350.67 1273.52 8585.50 00:26:34.057 =================================================================================================================== 00:26:34.057 Total : 138662.06 135.41 0.00 0.00 7362.37 1259.21 11161.15' 00:26:34.057 13:50:12 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-10 13:50:08.048299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:34.057 [2024-07-10 13:50:08.048461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136318 ] 00:26:34.057 Using job config with 4 jobs 00:26:34.057 [2024-07-10 13:50:08.208193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.057 [2024-07-10 13:50:08.420624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.057 cpumask for '\''job0'\'' is too big 00:26:34.057 cpumask for '\''job1'\'' is too big 00:26:34.057 cpumask for '\''job2'\'' is too big 00:26:34.057 cpumask for '\''job3'\'' is too big 00:26:34.057 Running I/O for 2 seconds... 
00:26:34.057 00:26:34.057 Latency(us) 00:26:34.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34674.80 33.86 0.00 0.00 7377.42 1259.21 11161.15 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34686.02 33.87 0.00 0.00 7364.17 1287.83 9901.95 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34661.87 33.85 0.00 0.00 7357.26 1294.98 8585.50 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34639.36 33.83 0.00 0.00 7350.67 1273.52 8585.50 00:26:34.057 =================================================================================================================== 00:26:34.057 Total : 138662.06 135.41 0.00 0.00 7362.37 1259.21 11161.15' 00:26:34.057 13:50:12 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:50:08.048299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:34.057 [2024-07-10 13:50:08.048461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136318 ] 00:26:34.057 Using job config with 4 jobs 00:26:34.057 [2024-07-10 13:50:08.208193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.057 [2024-07-10 13:50:08.420624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.057 cpumask for '\''job0'\'' is too big 00:26:34.057 cpumask for '\''job1'\'' is too big 00:26:34.057 cpumask for '\''job2'\'' is too big 00:26:34.057 cpumask for '\''job3'\'' is too big 00:26:34.057 Running I/O for 2 seconds... 00:26:34.057 00:26:34.057 Latency(us) 00:26:34.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34674.80 33.86 0.00 0.00 7377.42 1259.21 11161.15 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.01 34686.02 33.87 0.00 0.00 7364.17 1287.83 9901.95 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34661.87 33.85 0.00 0.00 7357.26 1294.98 8585.50 00:26:34.057 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:34.057 Malloc0 : 2.02 34639.36 33.83 0.00 0.00 7350.67 1273.52 8585.50 00:26:34.057 =================================================================================================================== 00:26:34.057 Total : 138662.06 135.41 0.00 0.00 7362.37 1259.21 11161.15' 00:26:34.057 13:50:12 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:34.057 13:50:12 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:34.057 13:50:12 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:34.057 13:50:12 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:34.057 [2024-07-10 13:50:12.442119] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:34.057 [2024-07-10 13:50:12.442283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136376 ] 00:26:34.057 [2024-07-10 13:50:12.597581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.057 [2024-07-10 13:50:12.814398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.057 cpumask for 'job0' is too big 00:26:34.057 cpumask for 'job1' is too big 00:26:34.057 cpumask for 'job2' is too big 00:26:34.057 cpumask for 'job3' is too big 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:38.259 Running I/O for 2 seconds... 00:26:38.259 00:26:38.259 Latency(us) 00:26:38.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.259 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:38.259 Malloc0 : 2.01 34010.18 33.21 0.00 0.00 7521.14 1380.83 12076.94 00:26:38.259 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:38.259 Malloc0 : 2.02 34018.94 33.22 0.00 0.00 7506.67 1323.60 10588.79 00:26:38.259 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:38.259 Malloc0 : 2.02 33997.77 33.20 0.00 0.00 7498.39 1366.53 9043.40 00:26:38.259 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:38.259 Malloc0 : 2.02 33975.47 33.18 0.00 0.00 7490.62 1287.83 9157.87 00:26:38.259 =================================================================================================================== 00:26:38.259 Total : 136002.36 132.81 0.00 0.00 7504.19 1287.83 12076.94' 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:38.259 13:50:16 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:38.259 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:38.259 13:50:16 -- bdevperf/common.sh@9 -- # local rw=write 00:26:38.259 13:50:16 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:38.259 13:50:16 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:38.259 13:50:16 -- bdevperf/common.sh@19 -- # echo 00:26:38.259 13:50:16 -- bdevperf/common.sh@20 -- # cat 00:26:38.259 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:38.259 13:50:16 -- bdevperf/common.sh@9 -- # local rw=write 00:26:38.259 13:50:16 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:38.259 13:50:16 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:38.259 13:50:16 -- bdevperf/common.sh@19 -- # echo 00:26:38.259 13:50:16 -- bdevperf/common.sh@20 -- # cat 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:38.259 13:50:16 -- bdevperf/common.sh@9 -- # local rw=write 00:26:38.259 13:50:16 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:38.259 13:50:16 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:38.259 13:50:16 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:26:38.259 13:50:16 -- bdevperf/common.sh@19 -- # echo 00:26:38.259 00:26:38.259 13:50:16 -- bdevperf/common.sh@20 -- # cat 00:26:38.259 13:50:16 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-10 13:50:16.871603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:42.455 [2024-07-10 13:50:16.871730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136427 ] 00:26:42.455 Using job config with 3 jobs 00:26:42.455 [2024-07-10 13:50:17.028817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.455 [2024-07-10 13:50:17.242612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.455 cpumask for '\''job0'\'' is too big 00:26:42.455 cpumask for '\''job1'\'' is too big 00:26:42.455 cpumask for '\''job2'\'' is too big 00:26:42.455 Running I/O for 2 seconds... 00:26:42.455 00:26:42.455 Latency(us) 00:26:42.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45519.63 44.45 0.00 0.00 5618.30 1545.39 9959.18 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45527.28 44.46 0.00 0.00 5606.72 1638.40 8299.32 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45496.11 44.43 0.00 0.00 5599.46 1538.24 6954.26 00:26:42.455 =================================================================================================================== 00:26:42.455 Total : 136543.02 133.34 0.00 0.00 5608.15 1538.24 9959.18' 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-10 13:50:16.871603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:42.455 [2024-07-10 13:50:16.871730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136427 ] 00:26:42.455 Using job config with 3 jobs 00:26:42.455 [2024-07-10 13:50:17.028817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.455 [2024-07-10 13:50:17.242612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.455 cpumask for '\''job0'\'' is too big 00:26:42.455 cpumask for '\''job1'\'' is too big 00:26:42.455 cpumask for '\''job2'\'' is too big 00:26:42.455 Running I/O for 2 seconds... 
00:26:42.455 00:26:42.455 Latency(us) 00:26:42.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45519.63 44.45 0.00 0.00 5618.30 1545.39 9959.18 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45527.28 44.46 0.00 0.00 5606.72 1638.40 8299.32 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45496.11 44.43 0.00 0.00 5599.46 1538.24 6954.26 00:26:42.455 =================================================================================================================== 00:26:42.455 Total : 136543.02 133.34 0.00 0.00 5608.15 1538.24 9959.18' 00:26:42.455 13:50:21 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:50:16.871603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:42.455 [2024-07-10 13:50:16.871730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136427 ] 00:26:42.455 Using job config with 3 jobs 00:26:42.455 [2024-07-10 13:50:17.028817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.455 [2024-07-10 13:50:17.242612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.455 cpumask for '\''job0'\'' is too big 00:26:42.455 cpumask for '\''job1'\'' is too big 00:26:42.455 cpumask for '\''job2'\'' is too big 00:26:42.455 Running I/O for 2 seconds... 00:26:42.455 00:26:42.455 Latency(us) 00:26:42.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45519.63 44.45 0.00 0.00 5618.30 1545.39 9959.18 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45527.28 44.46 0.00 0.00 5606.72 1638.40 8299.32 00:26:42.455 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:42.455 Malloc0 : 2.01 45496.11 44.43 0.00 0.00 5599.46 1538.24 6954.26 00:26:42.455 =================================================================================================================== 00:26:42.455 Total : 136543.02 133.34 0.00 0.00 5608.15 1538.24 9959.18' 00:26:42.455 13:50:21 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:42.455 13:50:21 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:42.455 13:50:21 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:42.455 13:50:21 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:42.455 13:50:21 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:42.455 13:50:21 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:42.455 13:50:21 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:42.455 13:50:21 -- bdevperf/common.sh@13 -- # cat 00:26:42.455 00:26:42.455 13:50:21 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:42.455 13:50:21 -- bdevperf/common.sh@19 -- # echo 00:26:42.455 
13:50:21 -- bdevperf/common.sh@20 -- # cat 00:26:42.455 13:50:21 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:42.455 13:50:21 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:42.455 13:50:21 -- bdevperf/common.sh@9 -- # local rw= 00:26:42.455 13:50:21 -- bdevperf/common.sh@10 -- # local filename= 00:26:42.455 13:50:21 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:42.455 13:50:21 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:42.455 13:50:21 -- bdevperf/common.sh@19 -- # echo 00:26:42.455 00:26:42.455 13:50:21 -- bdevperf/common.sh@20 -- # cat 00:26:42.456 13:50:21 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:42.456 13:50:21 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:42.456 13:50:21 -- bdevperf/common.sh@9 -- # local rw= 00:26:42.456 13:50:21 -- bdevperf/common.sh@10 -- # local filename= 00:26:42.456 13:50:21 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:42.456 13:50:21 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:42.456 13:50:21 -- bdevperf/common.sh@19 -- # echo 00:26:42.456 00:26:42.456 13:50:21 -- bdevperf/common.sh@20 -- # cat 00:26:42.456 13:50:21 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:42.456 13:50:21 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:42.456 13:50:21 -- bdevperf/common.sh@9 -- # local rw= 00:26:42.456 13:50:21 -- bdevperf/common.sh@10 -- # local filename= 00:26:42.456 13:50:21 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:42.456 13:50:21 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:42.456 13:50:21 -- bdevperf/common.sh@19 -- # echo 00:26:42.456 00:26:42.456 13:50:21 -- bdevperf/common.sh@20 -- # cat 00:26:42.456 13:50:21 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:42.456 13:50:21 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:42.456 13:50:21 -- bdevperf/common.sh@9 -- # local rw= 00:26:42.456 13:50:21 -- bdevperf/common.sh@10 -- # local filename= 00:26:42.456 13:50:21 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:42.456 13:50:21 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:42.456 00:26:42.456 13:50:21 -- bdevperf/common.sh@19 -- # echo 00:26:42.456 13:50:21 -- bdevperf/common.sh@20 -- # cat 00:26:42.456 13:50:21 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:46.653 13:50:25 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-10 13:50:21.349166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:46.654 [2024-07-10 13:50:21.349344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136516 ] 00:26:46.654 Using job config with 4 jobs 00:26:46.654 [2024-07-10 13:50:21.511288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.654 [2024-07-10 13:50:21.718151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.654 cpumask for '\''job0'\'' is too big 00:26:46.654 cpumask for '\''job1'\'' is too big 00:26:46.654 cpumask for '\''job2'\'' is too big 00:26:46.654 cpumask for '\''job3'\'' is too big 00:26:46.654 Running I/O for 2 seconds... 
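Reading the common.sh@8 through @20 trace lines together: create_job takes a section name plus optional rw/filename arguments, special-cases the [global] section, and appends an INI-style section to test.conf. The log shows only the commands, not the here-doc bodies, so the following is a hedged reconstruction rather than the actual source:

  # Hypothetical reconstruction of common.sh's create_job; variable names
  # follow the xtrace, the here-doc bodies are guesses.
  testdir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
  create_job() {
      local job_section=$1
      local rw=$2
      local filename=$3
      if [[ $job_section == "global" ]]; then
          cat >> "$testdir/test.conf" << EOF
[global]
filename=$filename
EOF
      fi
      job="[$job_section]"
      echo "$job" >> "$testdir/test.conf"
      cat >> "$testdir/test.conf" << EOF
${rw:+rw=$rw}
${filename:+filename=$filename}
EOF
  }
  # e.g. "create_job job0 write Malloc0" earlier in the run would append:
  #   [job0]
  #   rw=write
  #   filename=Malloc0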
00:26:46.654 00:26:46.654 Latency(us) 00:26:46.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16397.52 16.01 0.00 0.00 15600.62 3047.85 25985.45 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16386.22 16.00 0.00 0.00 15600.38 3577.29 26214.40 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16374.68 15.99 0.00 0.00 15565.80 2919.07 23238.09 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16363.64 15.98 0.00 0.00 15566.46 3777.62 23009.15 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16352.82 15.97 0.00 0.00 15530.45 3205.25 19689.42 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16341.72 15.96 0.00 0.00 15529.69 3863.48 19574.94 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16330.62 15.95 0.00 0.00 15494.70 2861.83 18773.63 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16319.09 15.94 0.00 0.00 15493.08 3405.58 18773.63 00:26:46.654 =================================================================================================================== 00:26:46.654 Total : 130866.31 127.80 0.00 0.00 15547.65 2861.83 26214.40' 00:26:46.654 13:50:25 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-10 13:50:21.349166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:46.654 [2024-07-10 13:50:21.349344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136516 ] 00:26:46.654 Using job config with 4 jobs 00:26:46.654 [2024-07-10 13:50:21.511288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.654 [2024-07-10 13:50:21.718151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.654 cpumask for '\''job0'\'' is too big 00:26:46.654 cpumask for '\''job1'\'' is too big 00:26:46.654 cpumask for '\''job2'\'' is too big 00:26:46.654 cpumask for '\''job3'\'' is too big 00:26:46.654 Running I/O for 2 seconds... 
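A quick sanity check on the table above: with 1024-byte IOs, MiB/s should equal IOPS * 1024 / 2^20, i.e. IOPS / 1024. The first row checks out:

  awk 'BEGIN { printf "%.2f MiB/s\n", 16397.52 * 1024 / 1048576 }'   # prints 16.01 MiB/s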
00:26:46.654 00:26:46.654 Latency(us) 00:26:46.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16397.52 16.01 0.00 0.00 15600.62 3047.85 25985.45 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16386.22 16.00 0.00 0.00 15600.38 3577.29 26214.40 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16374.68 15.99 0.00 0.00 15565.80 2919.07 23238.09 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16363.64 15.98 0.00 0.00 15566.46 3777.62 23009.15 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16352.82 15.97 0.00 0.00 15530.45 3205.25 19689.42 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16341.72 15.96 0.00 0.00 15529.69 3863.48 19574.94 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16330.62 15.95 0.00 0.00 15494.70 2861.83 18773.63 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16319.09 15.94 0.00 0.00 15493.08 3405.58 18773.63 00:26:46.654 =================================================================================================================== 00:26:46.654 Total : 130866.31 127.80 0.00 0.00 15547.65 2861.83 26214.40' 00:26:46.654 13:50:25 -- bdevperf/common.sh@32 -- # echo '[2024-07-10 13:50:21.349166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:46.654 [2024-07-10 13:50:21.349344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136516 ] 00:26:46.654 Using job config with 4 jobs 00:26:46.654 [2024-07-10 13:50:21.511288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.654 [2024-07-10 13:50:21.718151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.654 cpumask for '\''job0'\'' is too big 00:26:46.654 cpumask for '\''job1'\'' is too big 00:26:46.654 cpumask for '\''job2'\'' is too big 00:26:46.654 cpumask for '\''job3'\'' is too big 00:26:46.654 Running I/O for 2 seconds... 
00:26:46.654 00:26:46.654 Latency(us) 00:26:46.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16397.52 16.01 0.00 0.00 15600.62 3047.85 25985.45 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16386.22 16.00 0.00 0.00 15600.38 3577.29 26214.40 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.03 16374.68 15.99 0.00 0.00 15565.80 2919.07 23238.09 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.03 16363.64 15.98 0.00 0.00 15566.46 3777.62 23009.15 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16352.82 15.97 0.00 0.00 15530.45 3205.25 19689.42 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16341.72 15.96 0.00 0.00 15529.69 3863.48 19574.94 00:26:46.654 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc0 : 2.04 16330.62 15.95 0.00 0.00 15494.70 2861.83 18773.63 00:26:46.654 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:46.654 Malloc1 : 2.04 16319.09 15.94 0.00 0.00 15493.08 3405.58 18773.63 00:26:46.654 =================================================================================================================== 00:26:46.654 Total : 130866.31 127.80 0.00 0.00 15547.65 2861.83 26214.40' 00:26:46.654 13:50:25 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:46.654 13:50:25 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:46.654 13:50:25 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:46.654 13:50:25 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:46.654 13:50:25 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:46.654 13:50:25 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:46.654 00:26:46.654 real 0m17.992s 00:26:46.654 user 0m16.438s 00:26:46.654 sys 0m1.003s 00:26:46.654 13:50:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.654 13:50:25 -- common/autotest_common.sh@10 -- # set +x 00:26:46.654 ************************************ 00:26:46.654 END TEST bdevperf_config 00:26:46.654 ************************************ 00:26:46.654 13:50:25 -- spdk/autotest.sh@198 -- # uname -s 00:26:46.654 13:50:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:46.654 13:50:25 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:46.654 13:50:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:46.654 13:50:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:46.654 13:50:25 -- common/autotest_common.sh@10 -- # set +x 00:26:46.654 ************************************ 00:26:46.654 START TEST reactor_set_interrupt 00:26:46.654 ************************************ 00:26:46.654 13:50:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:46.654 * Looking for test storage... 
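The [[ 4 == \4 ]] assertion works because get_num_jobs boils the captured banner down to a single number; the echo-plus-two-greps shape is visible verbatim in the common.sh@32 trace lines above. A sketch matching that trace:

  # Reconstructed from the common.sh@32 xtrace: takes the captured
  # bdevperf output and prints just the job count.
  get_num_jobs() {
      echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }
  # test_config.sh then asserts, e.g.: [[ $(get_num_jobs "$bdevperf_output") == 4 ]]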
00:26:46.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.654 13:50:25 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:46.654 13:50:25 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:46.654 13:50:26 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:46.654 13:50:26 -- common/autotest_common.sh@34 -- # set -e 00:26:46.654 13:50:26 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:46.654 13:50:26 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:46.654 13:50:26 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:46.654 13:50:26 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:46.654 13:50:26 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:46.654 13:50:26 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:46.654 13:50:26 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:46.654 13:50:26 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:46.654 13:50:26 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:46.654 13:50:26 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:46.654 13:50:26 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:46.654 13:50:26 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:46.654 13:50:26 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:46.655 13:50:26 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:46.655 13:50:26 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:46.655 13:50:26 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:46.655 13:50:26 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:46.655 13:50:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:46.655 13:50:26 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:46.655 13:50:26 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:46.655 13:50:26 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:46.655 13:50:26 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:46.655 13:50:26 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:46.655 13:50:26 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:46.655 13:50:26 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:46.655 13:50:26 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:46.655 13:50:26 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:46.655 13:50:26 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:46.655 13:50:26 -- common/build_config.sh@28 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:46.655 13:50:26 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:46.655 13:50:26 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:46.655 13:50:26 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:46.655 13:50:26 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:46.655 13:50:26 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:46.655 13:50:26 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:46.655 13:50:26 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:46.655 13:50:26 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:46.655 13:50:26 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:46.655 13:50:26 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:46.655 13:50:26 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:46.655 13:50:26 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:46.655 13:50:26 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:46.655 13:50:26 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:46.655 13:50:26 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:46.655 13:50:26 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:46.655 13:50:26 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:46.655 13:50:26 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:46.655 13:50:26 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:46.655 13:50:26 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:46.655 13:50:26 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:46.655 13:50:26 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:46.655 13:50:26 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:46.655 13:50:26 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:46.655 13:50:26 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:46.655 13:50:26 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:46.655 13:50:26 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:46.655 13:50:26 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:46.655 13:50:26 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:46.655 13:50:26 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:46.655 13:50:26 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:46.655 13:50:26 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:46.655 13:50:26 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:46.655 13:50:26 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:46.655 13:50:26 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:46.655 13:50:26 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:46.655 13:50:26 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:46.655 13:50:26 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:46.655 13:50:26 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:46.655 13:50:26 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:46.655 13:50:26 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:46.655 13:50:26 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:46.655 13:50:26 -- common/build_config.sh@74 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:46.655 13:50:26 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:46.655 13:50:26 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:46.655 13:50:26 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:46.655 13:50:26 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:46.655 13:50:26 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:46.917 13:50:26 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:46.917 13:50:26 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:46.917 13:50:26 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:46.917 13:50:26 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:46.917 13:50:26 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:46.917 13:50:26 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:46.917 13:50:26 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:46.917 13:50:26 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:46.917 13:50:26 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:46.917 13:50:26 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:46.917 13:50:26 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:46.917 13:50:26 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:46.917 13:50:26 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:46.917 13:50:26 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:46.917 13:50:26 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:46.917 13:50:26 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:46.917 #define SPDK_CONFIG_H 00:26:46.917 #define SPDK_CONFIG_APPS 1 00:26:46.917 #define SPDK_CONFIG_ARCH native 00:26:46.917 #define SPDK_CONFIG_ASAN 1 00:26:46.917 #undef SPDK_CONFIG_AVAHI 00:26:46.917 #undef SPDK_CONFIG_CET 00:26:46.917 #define SPDK_CONFIG_COVERAGE 1 00:26:46.917 #define SPDK_CONFIG_CROSS_PREFIX 00:26:46.917 #undef SPDK_CONFIG_CRYPTO 00:26:46.917 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:46.917 #undef SPDK_CONFIG_CUSTOMOCF 00:26:46.917 #undef SPDK_CONFIG_DAOS 00:26:46.917 #define SPDK_CONFIG_DAOS_DIR 00:26:46.917 #define SPDK_CONFIG_DEBUG 1 00:26:46.917 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:46.917 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:46.917 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:46.917 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:46.917 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:46.917 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:46.917 #define SPDK_CONFIG_EXAMPLES 1 00:26:46.917 #undef SPDK_CONFIG_FC 00:26:46.917 #define SPDK_CONFIG_FC_PATH 00:26:46.917 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:46.917 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:46.917 #undef SPDK_CONFIG_FUSE 00:26:46.917 #undef SPDK_CONFIG_FUZZER 00:26:46.917 #define SPDK_CONFIG_FUZZER_LIB 00:26:46.917 #undef SPDK_CONFIG_GOLANG 00:26:46.917 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:46.917 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:46.917 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:46.917 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:46.917 #define 
SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:46.917 #define SPDK_CONFIG_IDXD 1 00:26:46.917 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:46.917 #undef SPDK_CONFIG_IPSEC_MB 00:26:46.917 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:46.917 #define SPDK_CONFIG_ISAL 1 00:26:46.917 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:46.917 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:46.917 #define SPDK_CONFIG_LIBDIR 00:26:46.917 #undef SPDK_CONFIG_LTO 00:26:46.917 #define SPDK_CONFIG_MAX_LCORES 00:26:46.917 #define SPDK_CONFIG_NVME_CUSE 1 00:26:46.917 #undef SPDK_CONFIG_OCF 00:26:46.917 #define SPDK_CONFIG_OCF_PATH 00:26:46.917 #define SPDK_CONFIG_OPENSSL_PATH 00:26:46.917 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:46.917 #undef SPDK_CONFIG_PGO_USE 00:26:46.917 #define SPDK_CONFIG_PREFIX /usr/local 00:26:46.917 #define SPDK_CONFIG_RAID5F 1 00:26:46.917 #undef SPDK_CONFIG_RBD 00:26:46.917 #define SPDK_CONFIG_RDMA 1 00:26:46.917 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:46.917 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:46.917 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:46.917 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:46.917 #undef SPDK_CONFIG_SHARED 00:26:46.917 #undef SPDK_CONFIG_SMA 00:26:46.917 #define SPDK_CONFIG_TESTS 1 00:26:46.917 #undef SPDK_CONFIG_TSAN 00:26:46.917 #undef SPDK_CONFIG_UBLK 00:26:46.917 #define SPDK_CONFIG_UBSAN 1 00:26:46.917 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:46.917 #undef SPDK_CONFIG_URING 00:26:46.917 #define SPDK_CONFIG_URING_PATH 00:26:46.917 #undef SPDK_CONFIG_URING_ZNS 00:26:46.917 #undef SPDK_CONFIG_USDT 00:26:46.917 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:46.917 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:46.917 #undef SPDK_CONFIG_VFIO_USER 00:26:46.917 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:46.917 #define SPDK_CONFIG_VHOST 1 00:26:46.917 #define SPDK_CONFIG_VIRTIO 1 00:26:46.917 #undef SPDK_CONFIG_VTUNE 00:26:46.917 #define SPDK_CONFIG_VTUNE_DIR 00:26:46.917 #define SPDK_CONFIG_WERROR 1 00:26:46.917 #define SPDK_CONFIG_WPDK_DIR 00:26:46.917 #undef SPDK_CONFIG_XNVME 00:26:46.917 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:46.917 13:50:26 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:46.917 13:50:26 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:46.917 13:50:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.917 13:50:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.917 13:50:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.917 13:50:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:46.917 13:50:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:46.917 13:50:26 -- paths/export.sh@4 -- # 
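The backslash blizzard in the applications.sh@23 test above is only xtrace escaping every glob character on the right-hand side of ==; with the escaping removed, the check is a plain substring match against the generated config header:

  # De-escaped equivalent of the applications.sh@23 check; the header path
  # is taken from the @22 trace line.
  if [[ $(< /home/vagrant/spdk_repo/spdk/include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "SPDK was built with debug enabled"
  fi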
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:46.917 13:50:26 -- paths/export.sh@5 -- # export PATH 00:26:46.917 13:50:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:46.917 13:50:26 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:46.917 13:50:26 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:46.917 13:50:26 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:46.917 13:50:26 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:46.917 13:50:26 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:46.917 13:50:26 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:46.917 13:50:26 -- pm/common@16 -- # TEST_TAG=N/A 00:26:46.917 13:50:26 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:46.917 13:50:26 -- common/autotest_common.sh@52 -- # : 1 00:26:46.917 13:50:26 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:46.917 13:50:26 -- common/autotest_common.sh@56 -- # : 0 00:26:46.917 13:50:26 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:46.917 13:50:26 -- common/autotest_common.sh@58 -- # : 0 00:26:46.917 13:50:26 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:46.917 13:50:26 -- common/autotest_common.sh@60 -- # : 1 00:26:46.917 13:50:26 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:46.917 13:50:26 -- common/autotest_common.sh@62 -- # : 1 00:26:46.917 13:50:26 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:46.917 13:50:26 -- common/autotest_common.sh@64 -- # : 00:26:46.918 13:50:26 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:46.918 13:50:26 -- common/autotest_common.sh@66 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:46.918 13:50:26 -- common/autotest_common.sh@68 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:46.918 13:50:26 -- common/autotest_common.sh@70 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:46.918 13:50:26 -- common/autotest_common.sh@72 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:46.918 13:50:26 -- common/autotest_common.sh@74 -- # : 1 00:26:46.918 13:50:26 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:46.918 13:50:26 -- common/autotest_common.sh@76 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:46.918 13:50:26 -- common/autotest_common.sh@78 -- # : 0 00:26:46.918 13:50:26 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:46.918 13:50:26 -- common/autotest_common.sh@80 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:46.918 13:50:26 -- common/autotest_common.sh@82 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:46.918 13:50:26 -- common/autotest_common.sh@84 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:46.918 13:50:26 -- common/autotest_common.sh@86 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:46.918 13:50:26 -- common/autotest_common.sh@88 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:46.918 13:50:26 -- common/autotest_common.sh@90 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:46.918 13:50:26 -- common/autotest_common.sh@92 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:46.918 13:50:26 -- common/autotest_common.sh@94 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:46.918 13:50:26 -- common/autotest_common.sh@96 -- # : rdma 00:26:46.918 13:50:26 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:46.918 13:50:26 -- common/autotest_common.sh@98 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:46.918 13:50:26 -- common/autotest_common.sh@100 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:46.918 13:50:26 -- common/autotest_common.sh@102 -- # : 1 00:26:46.918 13:50:26 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:46.918 13:50:26 -- common/autotest_common.sh@104 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:46.918 13:50:26 -- common/autotest_common.sh@106 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:46.918 13:50:26 -- common/autotest_common.sh@108 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:46.918 13:50:26 -- common/autotest_common.sh@110 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:46.918 13:50:26 -- common/autotest_common.sh@112 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:46.918 13:50:26 -- common/autotest_common.sh@114 -- # : 1 00:26:46.918 13:50:26 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:46.918 13:50:26 -- common/autotest_common.sh@116 -- # : 1 00:26:46.918 13:50:26 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:46.918 13:50:26 -- common/autotest_common.sh@118 -- # : 00:26:46.918 13:50:26 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:46.918 13:50:26 -- common/autotest_common.sh@120 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:46.918 13:50:26 -- common/autotest_common.sh@122 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:46.918 13:50:26 -- common/autotest_common.sh@124 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:46.918 13:50:26 -- common/autotest_common.sh@126 -- # : 0 00:26:46.918 
13:50:26 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:46.918 13:50:26 -- common/autotest_common.sh@128 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:46.918 13:50:26 -- common/autotest_common.sh@130 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:46.918 13:50:26 -- common/autotest_common.sh@132 -- # : 00:26:46.918 13:50:26 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:46.918 13:50:26 -- common/autotest_common.sh@134 -- # : true 00:26:46.918 13:50:26 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:46.918 13:50:26 -- common/autotest_common.sh@136 -- # : 1 00:26:46.918 13:50:26 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:46.918 13:50:26 -- common/autotest_common.sh@138 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:46.918 13:50:26 -- common/autotest_common.sh@140 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:46.918 13:50:26 -- common/autotest_common.sh@142 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:46.918 13:50:26 -- common/autotest_common.sh@144 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:46.918 13:50:26 -- common/autotest_common.sh@146 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:46.918 13:50:26 -- common/autotest_common.sh@148 -- # : 00:26:46.918 13:50:26 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:46.918 13:50:26 -- common/autotest_common.sh@150 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:46.918 13:50:26 -- common/autotest_common.sh@152 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:46.918 13:50:26 -- common/autotest_common.sh@154 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:46.918 13:50:26 -- common/autotest_common.sh@156 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:46.918 13:50:26 -- common/autotest_common.sh@158 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:46.918 13:50:26 -- common/autotest_common.sh@160 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:46.918 13:50:26 -- common/autotest_common.sh@163 -- # : 00:26:46.918 13:50:26 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:46.918 13:50:26 -- common/autotest_common.sh@165 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:46.918 13:50:26 -- common/autotest_common.sh@167 -- # : 0 00:26:46.918 13:50:26 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:46.918 13:50:26 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:46.918 13:50:26 -- 
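The long run of ': 1' / ': 0' lines followed by exports (autotest_common.sh@52 onward) is bash's default-then-export idiom rather than noise: ':' is the no-op builtin, and the ${VAR:=default} expansion inside it assigns only when the variable is unset or empty, so the values sourced from autorun-spdk.conf win. Roughly:

  : "${RUN_NIGHTLY:=1}"            # keeps the 1 set earlier by autorun-spdk.conf
  export RUN_NIGHTLY
  : "${SPDK_TEST_OCF:=0}"          # hypothetical default; the real defaults are not shown in the log
  export SPDK_TEST_OCF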
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:46.918 13:50:26 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:46.918 13:50:26 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:46.918 13:50:26 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:46.918 13:50:26 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:46.918 13:50:26 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:46.918 13:50:26 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:46.918 13:50:26 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:46.918 13:50:26 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:46.918 13:50:26 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:46.918 13:50:26 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:46.918 13:50:26 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:46.918 13:50:26 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:46.918 13:50:26 -- common/autotest_common.sh@196 -- # cat 00:26:46.918 13:50:26 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:46.918 13:50:26 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:46.918 13:50:26 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:46.918 13:50:26 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:46.918 
13:50:26 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:46.918 13:50:26 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:46.918 13:50:26 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:46.918 13:50:26 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:46.918 13:50:26 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:46.918 13:50:26 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:46.918 13:50:26 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:46.918 13:50:26 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:46.918 13:50:26 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:46.918 13:50:26 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:46.918 13:50:26 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:46.918 13:50:26 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:46.918 13:50:26 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:46.919 13:50:26 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:46.919 13:50:26 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:46.919 13:50:26 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:46.919 13:50:26 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:46.919 13:50:26 -- common/autotest_common.sh@249 -- # valgrind= 00:26:46.919 13:50:26 -- common/autotest_common.sh@255 -- # uname -s 00:26:46.919 13:50:26 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:46.919 13:50:26 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:46.919 13:50:26 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:46.919 13:50:26 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:46.919 13:50:26 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:46.919 13:50:26 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:46.919 13:50:26 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:46.919 13:50:26 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:46.919 13:50:26 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:46.919 13:50:26 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:46.919 13:50:26 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:46.919 13:50:26 -- common/autotest_common.sh@309 -- # [[ -z 136607 ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@309 -- # kill -0 136607 00:26:46.919 13:50:26 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:46.919 13:50:26 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:46.919 13:50:26 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:46.919 13:50:26 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:46.919 13:50:26 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:26:46.919 13:50:26 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:46.919 13:50:26 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:46.919 13:50:26 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.haW5Uw 00:26:46.919 13:50:26 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:46.919 13:50:26 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.haW5Uw/tests/interrupt /tmp/spdk.haW5Uw 00:26:46.919 13:50:26 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@318 -- # df -T 00:26:46.919 13:50:26 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=10613776384 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=9986240512 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269964288 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:46.919 13:50:26 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:46.919 13:50:26 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=94304456704 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=5398323200 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:46.919 13:50:26 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:46.919 13:50:26 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:46.919 13:50:26 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:46.919 13:50:26 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:46.919 13:50:26 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:46.919 * Looking for test storage... 00:26:46.919 13:50:26 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:46.919 13:50:26 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:46.919 13:50:26 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.919 13:50:26 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:46.919 13:50:26 -- common/autotest_common.sh@363 -- # mount=/ 00:26:46.919 13:50:26 -- common/autotest_common.sh@365 -- # target_space=10613776384 00:26:46.919 13:50:26 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:46.919 13:50:26 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:46.919 13:50:26 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:46.919 13:50:26 -- common/autotest_common.sh@372 -- # new_size=12200833024 00:26:46.919 13:50:26 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:46.919 13:50:26 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.919 13:50:26 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.919 13:50:26 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:46.919 13:50:26 -- common/autotest_common.sh@380 -- # return 0 00:26:46.919 13:50:26 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:46.919 13:50:26 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:46.919 13:50:26 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:46.919 13:50:26 -- 
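The set_test_storage numbers above are internally consistent: target_space is the root filesystem's available bytes (avails["/"] = 10613776384), and new_size is its used bytes plus the requested reservation (9986240512 + 2214592512 = 12200833024), which must then stay at or below 95% of the filesystem size. Checked in shell:

  # Values copied from the trace; the mount qualifies because 12200833024
  # is well under 95% of 20616794112 (about 59%).
  requested_size=2214592512
  used=9986240512
  size=20616794112
  new_size=$((used + requested_size))
  echo "$new_size"                     # 12200833024
  echo $((new_size * 100 / size))      # 59, i.e. <= 95, so / is used for test storage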
common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:46.919 13:50:26 -- common/autotest_common.sh@1672 -- # true 00:26:46.920 13:50:26 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:46.920 13:50:26 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:46.920 13:50:26 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:46.920 13:50:26 -- common/autotest_common.sh@27 -- # exec 00:26:46.920 13:50:26 -- common/autotest_common.sh@29 -- # exec 00:26:46.920 13:50:26 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:46.920 13:50:26 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:46.920 13:50:26 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:46.920 13:50:26 -- common/autotest_common.sh@18 -- # set -x 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:46.920 13:50:26 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:46.920 13:50:26 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:46.920 13:50:26 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136647 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:46.920 13:50:26 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136647 /var/tmp/spdk.sock 00:26:46.920 13:50:26 -- common/autotest_common.sh@819 -- # '[' -z 136647 ']' 00:26:46.920 13:50:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.920 13:50:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:46.920 13:50:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.920 13:50:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:46.920 13:50:26 -- common/autotest_common.sh@10 -- # set +x 00:26:46.920 [2024-07-10 13:50:26.170631] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
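[Editor's note] Every traced command in this log carries the "HH:MM:SS -- file@LINENO -- $ cmd" prefix produced by the PS4 string set just above. A minimal standalone sketch of that tracing prologue follows; the ERR trap here is a simplified stand-in for the real print_backtrace handler, not the actual autotest_common.sh code:

```bash
#!/usr/bin/env bash
# Minimal sketch of the xtrace prologue traced above. PS4 undergoes prompt
# expansion, so \t becomes the wall-clock time and the parameter expansion
# trims the source path to its last two components; together they produce
# the "13:50:26 -- dir/file.sh@NN -- $ cmd" lines seen throughout this log.
set -o errtrace      # propagate the ERR trap into functions and subshells
shopt -s extdebug    # expose BASH_SOURCE/BASH_LINENO for backtraces
trap 'echo "error at ${BASH_SOURCE[0]}:${LINENO}" >&2' ERR   # simplified stand-in
PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
set -x
echo hello    # traced as: 13:50:26 -- dir/script.sh@NN -- $ echo hello
```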
00:26:46.920 [2024-07-10 13:50:26.170755] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136647 ] 00:26:47.180 [2024-07-10 13:50:26.335367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:47.440 [2024-07-10 13:50:26.539813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.440 [2024-07-10 13:50:26.539911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.440 [2024-07-10 13:50:26.539914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.699 [2024-07-10 13:50:26.852615] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:47.699 13:50:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:47.699 13:50:26 -- common/autotest_common.sh@852 -- # return 0 00:26:47.699 13:50:26 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:47.699 13:50:26 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:48.289 Malloc0 00:26:48.289 Malloc1 00:26:48.289 Malloc2 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:48.289 5000+0 records in 00:26:48.289 5000+0 records out 00:26:48.289 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0249164 s, 411 MB/s 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:48.289 AIO0 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 136647 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 136647 without_thd 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136647 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:48.289 13:50:27 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:48.289 13:50:27 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:48.550 13:50:27 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:48.550 13:50:27 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:48.550 13:50:27 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:48.808 13:50:27 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:48.808 spdk_thread ids are 1 on reactor0. 00:26:48.808 13:50:27 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:48.808 13:50:27 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:48.808 13:50:27 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136647 0 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136647 0 idle 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:48.808 13:50:27 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136647 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.77 reactor_0' 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@48 -- # echo 136647 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.77 reactor_0 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:48.808 13:50:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:48.808 13:50:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136647 1 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136647 1 idle 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:48.808 
13:50:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:48.808 13:50:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136650 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.00 reactor_1' 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@48 -- # echo 136650 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.00 reactor_1 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:49.066 13:50:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:49.066 13:50:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136647 2 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136647 2 idle 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:49.066 13:50:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136652 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.00 reactor_2' 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@48 -- # echo 136652 root 20 0 20.1t 145380 28500 S 0.0 1.2 0:00.00 reactor_2 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:49.323 13:50:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:49.323 13:50:28 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:49.323 13:50:28 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
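[Editor's note] The repeated top/grep/sed/awk sequences above are the reactor_is_busy_or_idle probe: one batch sample of the target's threads, the %CPU column of the reactor_N thread, then a threshold check. A hedged sketch, with the thresholds the trace compares against (busy fails below 70% CPU, idle fails above 30%):

```bash
#!/usr/bin/env bash
# Hedged sketch of the busy/idle probe: -b batch mode, -H per-thread view,
# -n 1 single iteration, -w 256 wide output, filtered to the target PID.
# $9 is the %CPU column of top's default thread listing.
pid=$1 idx=$2 state=$3   # e.g. ./probe.sh 136647 0 idle

top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}   # truncate "99.9" -> "99", as the trace shows

if [[ $state == busy ]]; then
    (( cpu_rate >= 70 )) && echo busy || echo "not busy ($cpu_rate%)"
else
    (( cpu_rate <= 30 )) && echo idle || echo "not idle ($cpu_rate%)"
fi
```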
00:26:49.323 13:50:28 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:49.581 [2024-07-10 13:50:28.689439] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:49.581 13:50:28 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:49.581 [2024-07-10 13:50:28.889082] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:49.581 [2024-07-10 13:50:28.890116] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:49.581 13:50:28 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:49.840 [2024-07-10 13:50:29.076933] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:49.840 [2024-07-10 13:50:29.078020] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:49.840 13:50:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:49.840 13:50:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136647 0 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136647 0 busy 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:49.840 13:50:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136647 root 20 0 20.1t 145484 28500 R 99.9 1.2 0:01.13 reactor_0' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # echo 136647 root 20 0 20.1t 145484 28500 R 99.9 1.2 0:01.13 reactor_0 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:50.099 13:50:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:50.099 13:50:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136647 2 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136647 2 busy 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:50.099 
13:50:29 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136652 root 20 0 20.1t 145484 28500 R 99.9 1.2 0:00.34 reactor_2' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # echo 136652 root 20 0 20.1t 145484 28500 R 99.9 1.2 0:00.34 reactor_2 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:50.099 13:50:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:50.099 13:50:29 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:50.358 [2024-07-10 13:50:29.608959] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:50.358 [2024-07-10 13:50:29.609783] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:50.358 13:50:29 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:50.358 13:50:29 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136647 2 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136647 2 idle 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:50.358 13:50:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136652 root 20 0 20.1t 145552 28500 S 0.0 1.2 0:00.53 reactor_2' 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@48 -- # echo 136652 root 20 0 20.1t 145552 28500 S 0.0 1.2 0:00.53 reactor_2 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:50.617 13:50:29 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:50.617 13:50:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:50.617 13:50:29 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:50.617 [2024-07-10 13:50:29.956947] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:50.617 [2024-07-10 13:50:29.958289] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:50.617 13:50:29 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:50.617 13:50:29 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:50.617 13:50:29 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:50.876 [2024-07-10 13:50:30.133331] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:50.876 13:50:30 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136647 0 00:26:50.876 13:50:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136647 0 idle 00:26:50.876 13:50:30 -- interrupt/interrupt_common.sh@33 -- # local pid=136647 00:26:50.876 13:50:30 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136647 -w 256 00:26:50.877 13:50:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136647 root 20 0 20.1t 145644 28500 S 0.0 1.2 0:01.84 reactor_0' 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@48 -- # echo 136647 root 20 0 20.1t 145644 28500 S 0.0 1.2 0:01.84 reactor_0 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:51.135 13:50:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:51.135 13:50:30 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:51.135 13:50:30 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:51.135 13:50:30 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:51.135 13:50:30 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 136647 
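[Editor's note] The killprocess call traced here unfolds, in the lines that follow, into a liveness check, a command-name check, the kill, and a wait. A hedged sketch of that helper; the real one also branches on uname (Linux vs FreeBSD), and this keeps only the Linux path:

```bash
# Hedged sketch of killprocess: verify the PID is alive, inspect its
# command name (refusing to signal anything running as "sudo"), then
# kill it and reap the background job so its exit status propagates.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
    [[ $process_name == sudo ]] && return 1          # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
```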
00:26:51.135 13:50:30 -- common/autotest_common.sh@926 -- # '[' -z 136647 ']' 00:26:51.135 13:50:30 -- common/autotest_common.sh@930 -- # kill -0 136647 00:26:51.135 13:50:30 -- common/autotest_common.sh@931 -- # uname 00:26:51.135 13:50:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:51.135 13:50:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136647 00:26:51.135 13:50:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:51.135 13:50:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:51.135 13:50:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136647' 00:26:51.135 killing process with pid 136647 00:26:51.135 13:50:30 -- common/autotest_common.sh@945 -- # kill 136647 00:26:51.135 13:50:30 -- common/autotest_common.sh@950 -- # wait 136647 00:26:52.516 13:50:31 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:52.516 13:50:31 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136813 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:52.516 13:50:31 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136813 /var/tmp/spdk.sock 00:26:52.516 13:50:31 -- common/autotest_common.sh@819 -- # '[' -z 136813 ']' 00:26:52.516 13:50:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.516 13:50:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:52.516 13:50:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.516 13:50:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:52.516 13:50:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.776 [2024-07-10 13:50:31.877162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.776 [2024-07-10 13:50:31.877323] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136813 ] 00:26:52.776 [2024-07-10 13:50:32.040307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.035 [2024-07-10 13:50:32.236468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.035 [2024-07-10 13:50:32.236648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.035 [2024-07-10 13:50:32.236656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.294 [2024-07-10 13:50:32.522038] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
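[Editor's note] This second target (pid 136813) is brought up by the same start_intr_tgt/waitforlisten pair as the first. A hedged sketch of that pattern; using rpc_get_methods as the readiness probe is an assumption for illustration, since the real waitforlisten performs its own socket checks:

```bash
# Hedged sketch of start_intr_tgt + waitforlisten: launch the interrupt-mode
# target on cores 0-2 (-m 0x07), then poll until its RPC server answers on
# the UNIX domain socket.
rpc_addr=/var/tmp/spdk.sock
build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
max_retries=100
until scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; do
    (( --max_retries > 0 )) || { echo 'timed out'; exit 1; }
    sleep 0.1
done
```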
00:26:53.554 13:50:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:53.554 13:50:32 -- common/autotest_common.sh@852 -- # return 0 00:26:53.554 13:50:32 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:53.554 13:50:32 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:53.820 Malloc0 00:26:53.820 Malloc1 00:26:53.820 Malloc2 00:26:53.820 13:50:33 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:53.820 13:50:33 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:53.820 13:50:33 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:53.820 13:50:33 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:53.820 5000+0 records in 00:26:53.820 5000+0 records out 00:26:53.820 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0249345 s, 411 MB/s 00:26:53.820 13:50:33 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:54.082 AIO0 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 136813 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 136813 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136813 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:54.082 13:50:33 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:54.082 13:50:33 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:54.342 spdk_thread ids are 1 on reactor0. 
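[Editor's note] The jq pipeline above is how reactor_get_thread_ids maps a reactor's cpumask to spdk thread IDs; at this point only app_thread exists, so reactor 0 (mask 0x1) yields ID 1 while reactor 2 (mask 0x4) yields nothing, matching the empty echo in the trace. A sketch of that lookup:

```bash
# Hedged sketch of reactor_get_thread_ids: query thread_get_stats over RPC
# and select the IDs of threads whose cpumask matches the reactor's mask.
# The mask is normalized first ("0x1" -> "1") so it compares as a string.
reactor_get_thread_ids() {
    local reactor_cpumask=$1
    reactor_cpumask=$(printf '%x' $((reactor_cpumask)))
    local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    scripts/rpc.py thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
}

thd0_ids=($(reactor_get_thread_ids 0x1))   # "1": app_thread lives on core 0
thd2_ids=($(reactor_get_thread_ids 0x4))   # empty until threads exist there
```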
00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:54.342 13:50:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136813 0 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136813 0 idle 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:54.342 13:50:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136813 root 20 0 20.1t 145388 28528 S 6.7 1.2 0:00.72 reactor_0' 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@48 -- # echo 136813 root 20 0 20.1t 145388 28528 S 6.7 1.2 0:00.72 reactor_0 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:54.601 13:50:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:54.601 13:50:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136813 1 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136813 1 idle 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:54.601 13:50:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136818 root 20 0 20.1t 145388 28528 S 0.0 1.2 0:00.00 reactor_1' 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@48 -- # echo 136818 root 20 0 20.1t 145388 28528 S 0.0 1.2 0:00.00 reactor_1 00:26:54.861 13:50:33 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:54.861 13:50:33 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:54.861 13:50:33 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136813 2 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136813 2 idle 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:54.861 13:50:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136819 root 20 0 20.1t 145388 28528 S 0.0 1.2 0:00.00 reactor_2' 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@48 -- # echo 136819 root 20 0 20.1t 145388 28528 S 0.0 1.2 0:00.00 reactor_2 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:54.861 13:50:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:54.861 13:50:34 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:54.861 13:50:34 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:55.121 [2024-07-10 13:50:34.342382] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:55.121 [2024-07-10 13:50:34.342691] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
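[Editor's note] reactor_set_interrupt_mode comes from the test-only RPC plugin (hence --plugin interrupt_plugin, resolved via the examples/interrupt_tgt entry added to PYTHONPATH earlier), where -d disables interrupt mode, dropping the reactor back to polling. The cycle this test drives reduces to roughly:

```bash
# Hedged sketch of the mode-toggle cycle exercised by this test.
rpc="scripts/rpc.py --plugin interrupt_plugin"

$rpc reactor_set_interrupt_mode 0 -d   # reactor 0: interrupt -> poll
$rpc reactor_set_interrupt_mode 2 -d   # reactor 2: interrupt -> poll
# ...assert both reactors report ~99.9% CPU via the top probe...
$rpc reactor_set_interrupt_mode 2      # reactor 2: poll -> interrupt
$rpc reactor_set_interrupt_mode 0      # reactor 0: poll -> interrupt
# ...assert both reactors are idle again...
```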
00:26:55.121 [2024-07-10 13:50:34.343311] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:55.121 13:50:34 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:55.380 [2024-07-10 13:50:34.521809] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:55.380 [2024-07-10 13:50:34.522457] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:55.380 13:50:34 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:55.380 13:50:34 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136813 0 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136813 0 busy 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:55.380 13:50:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136813 root 20 0 20.1t 145464 28528 R 99.9 1.2 0:01.08 reactor_0' 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@48 -- # echo 136813 root 20 0 20.1t 145464 28528 R 99.9 1.2 0:01.08 reactor_0 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:55.381 13:50:34 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:55.381 13:50:34 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136813 2 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136813 2 busy 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:55.381 13:50:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
136819 root 20 0 20.1t 145464 28528 R 99.9 1.2 0:00.35 reactor_2' 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@48 -- # echo 136819 root 20 0 20.1t 145464 28528 R 99.9 1.2 0:00.35 reactor_2 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:55.640 13:50:34 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:55.640 13:50:34 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:55.970 [2024-07-10 13:50:35.057057] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:55.970 [2024-07-10 13:50:35.057738] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:55.971 13:50:35 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:55.971 13:50:35 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136813 2 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136813 2 idle 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136819 root 20 0 20.1t 145536 28528 S 0.0 1.2 0:00.53 reactor_2' 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@48 -- # echo 136819 root 20 0 20.1t 145536 28528 S 0.0 1.2 0:00.53 reactor_2 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:55.971 13:50:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:55.971 13:50:35 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:56.230 [2024-07-10 13:50:35.416369] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:26:56.230 [2024-07-10 13:50:35.417044] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:26:56.230 [2024-07-10 13:50:35.417146] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:56.230 13:50:35 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:56.230 13:50:35 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136813 0 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136813 0 idle 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@33 -- # local pid=136813 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136813 -w 256 00:26:56.230 13:50:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136813 root 20 0 20.1t 145576 28528 S 0.0 1.2 0:01.80 reactor_0' 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@48 -- # echo 136813 root 20 0 20.1t 145576 28528 S 0.0 1.2 0:01.80 reactor_0 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:56.489 13:50:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:56.489 13:50:35 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:56.489 13:50:35 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:56.489 13:50:35 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:56.489 13:50:35 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 136813 00:26:56.489 13:50:35 -- common/autotest_common.sh@926 -- # '[' -z 136813 ']' 00:26:56.489 13:50:35 -- common/autotest_common.sh@930 -- # kill -0 136813 00:26:56.489 13:50:35 -- common/autotest_common.sh@931 -- # uname 00:26:56.489 13:50:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:56.489 13:50:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136813 00:26:56.489 killing process with pid 136813 00:26:56.489 13:50:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:56.489 13:50:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:56.489 13:50:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136813' 00:26:56.489 13:50:35 -- common/autotest_common.sh@945 -- # kill 136813 00:26:56.489 13:50:35 -- common/autotest_common.sh@950 -- # wait 136813 00:26:57.867 13:50:37 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:57.867 13:50:37 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:57.867 ************************************ 00:26:57.867 END TEST reactor_set_interrupt 00:26:57.867 ************************************ 00:26:57.867 00:26:57.867 real 0m11.153s 00:26:57.867 user 0m11.296s 00:26:57.867 sys 0m1.539s 00:26:57.867 13:50:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.867 13:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 13:50:37 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:57.867 13:50:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:57.867 13:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:57.867 13:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 ************************************ 00:26:57.867 START TEST reap_unregistered_poller 00:26:57.867 ************************************ 00:26:57.867 13:50:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:57.867 * Looking for test storage... 00:26:58.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:58.128 13:50:37 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
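[Editor's note] The starred START TEST / END TEST banners and the real/user/sys timing summary come from the run_test wrapper in autotest_common.sh. An approximate sketch of that wrapper; banner width and the exact argument bookkeeping are assumptions:

```bash
# Approximate sketch of run_test: argument check, START/END banners,
# and timing of the wrapped test script.
run_test() {
    local test_name=$1; shift
    (( $# >= 1 )) || { echo "usage: run_test <name> <cmd...>"; return 1; }
    local banner='************************************'
    printf '%s\nSTART TEST %s\n%s\n' "$banner" "$test_name" "$banner"
    time "$@"
    local rc=$?
    printf '%s\nEND TEST %s\n%s\n' "$banner" "$test_name" "$banner"
    return $rc
}

run_test reap_unregistered_poller test/interrupt/reap_unregistered_poller.sh
```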
00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:58.128 13:50:37 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:58.128 13:50:37 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:58.128 13:50:37 -- common/autotest_common.sh@34 -- # set -e 00:26:58.128 13:50:37 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:58.128 13:50:37 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:58.128 13:50:37 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:58.128 13:50:37 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:58.128 13:50:37 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:58.128 13:50:37 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:58.128 13:50:37 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:58.128 13:50:37 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:58.128 13:50:37 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:58.128 13:50:37 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:58.128 13:50:37 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:58.128 13:50:37 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:58.128 13:50:37 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:58.128 13:50:37 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:58.128 13:50:37 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:58.128 13:50:37 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:58.128 13:50:37 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:58.128 13:50:37 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:58.129 13:50:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:58.129 13:50:37 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:58.129 13:50:37 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:58.129 13:50:37 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:58.129 13:50:37 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:58.129 13:50:37 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:58.129 13:50:37 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:58.129 13:50:37 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:58.129 13:50:37 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:58.129 13:50:37 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:58.129 13:50:37 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:58.129 13:50:37 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:58.129 13:50:37 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:58.129 13:50:37 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:58.129 13:50:37 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:58.129 13:50:37 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:58.129 13:50:37 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:58.129 13:50:37 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:58.129 13:50:37 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:58.129 13:50:37 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:58.129 13:50:37 -- 
common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:58.129 13:50:37 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:58.129 13:50:37 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:58.129 13:50:37 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:58.129 13:50:37 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:58.129 13:50:37 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:58.129 13:50:37 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:58.129 13:50:37 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:58.129 13:50:37 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:58.129 13:50:37 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:58.129 13:50:37 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:58.129 13:50:37 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:58.129 13:50:37 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:58.129 13:50:37 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:58.129 13:50:37 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:58.129 13:50:37 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:58.129 13:50:37 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:58.129 13:50:37 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:58.129 13:50:37 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:58.129 13:50:37 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:58.129 13:50:37 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:58.129 13:50:37 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:58.129 13:50:37 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:58.129 13:50:37 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:58.129 13:50:37 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:58.129 13:50:37 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:58.129 13:50:37 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:58.129 13:50:37 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:58.129 13:50:37 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:58.129 13:50:37 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:58.129 13:50:37 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:58.129 13:50:37 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:58.129 13:50:37 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:58.129 13:50:37 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:58.129 13:50:37 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:58.129 13:50:37 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:58.129 13:50:37 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:58.129 13:50:37 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:58.129 13:50:37 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:58.129 13:50:37 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:58.129 13:50:37 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:58.129 13:50:37 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:58.129 13:50:37 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:58.129 
13:50:37 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:58.129 13:50:37 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:58.129 13:50:37 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:58.129 13:50:37 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:58.129 13:50:37 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:58.129 13:50:37 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:58.129 13:50:37 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:58.129 13:50:37 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:58.129 13:50:37 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:58.129 13:50:37 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:58.129 13:50:37 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:58.129 13:50:37 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:58.129 13:50:37 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:58.129 #define SPDK_CONFIG_H 00:26:58.129 #define SPDK_CONFIG_APPS 1 00:26:58.129 #define SPDK_CONFIG_ARCH native 00:26:58.129 #define SPDK_CONFIG_ASAN 1 00:26:58.129 #undef SPDK_CONFIG_AVAHI 00:26:58.129 #undef SPDK_CONFIG_CET 00:26:58.129 #define SPDK_CONFIG_COVERAGE 1 00:26:58.129 #define SPDK_CONFIG_CROSS_PREFIX 00:26:58.129 #undef SPDK_CONFIG_CRYPTO 00:26:58.129 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:58.129 #undef SPDK_CONFIG_CUSTOMOCF 00:26:58.129 #undef SPDK_CONFIG_DAOS 00:26:58.129 #define SPDK_CONFIG_DAOS_DIR 00:26:58.129 #define SPDK_CONFIG_DEBUG 1 00:26:58.129 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:58.129 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:58.129 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:58.129 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:58.129 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:58.129 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:58.129 #define SPDK_CONFIG_EXAMPLES 1 00:26:58.129 #undef SPDK_CONFIG_FC 00:26:58.129 #define SPDK_CONFIG_FC_PATH 00:26:58.129 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:58.129 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:58.129 #undef SPDK_CONFIG_FUSE 00:26:58.129 #undef SPDK_CONFIG_FUZZER 00:26:58.129 #define SPDK_CONFIG_FUZZER_LIB 00:26:58.129 #undef SPDK_CONFIG_GOLANG 00:26:58.129 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:58.130 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:58.130 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:58.130 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:58.130 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:58.130 #define SPDK_CONFIG_IDXD 1 00:26:58.130 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:58.130 #undef SPDK_CONFIG_IPSEC_MB 00:26:58.130 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:58.130 #define SPDK_CONFIG_ISAL 1 00:26:58.130 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:58.130 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:58.130 #define SPDK_CONFIG_LIBDIR 00:26:58.130 #undef SPDK_CONFIG_LTO 00:26:58.130 #define SPDK_CONFIG_MAX_LCORES 00:26:58.130 #define SPDK_CONFIG_NVME_CUSE 1 00:26:58.130 #undef SPDK_CONFIG_OCF 00:26:58.130 #define SPDK_CONFIG_OCF_PATH 00:26:58.130 #define SPDK_CONFIG_OPENSSL_PATH 00:26:58.130 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:58.130 #undef SPDK_CONFIG_PGO_USE 00:26:58.130 #define SPDK_CONFIG_PREFIX /usr/local 
00:26:58.130 #define SPDK_CONFIG_RAID5F 1 00:26:58.130 #undef SPDK_CONFIG_RBD 00:26:58.130 #define SPDK_CONFIG_RDMA 1 00:26:58.130 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:58.130 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:58.130 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:58.130 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:58.130 #undef SPDK_CONFIG_SHARED 00:26:58.130 #undef SPDK_CONFIG_SMA 00:26:58.130 #define SPDK_CONFIG_TESTS 1 00:26:58.130 #undef SPDK_CONFIG_TSAN 00:26:58.130 #undef SPDK_CONFIG_UBLK 00:26:58.130 #define SPDK_CONFIG_UBSAN 1 00:26:58.130 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:58.130 #undef SPDK_CONFIG_URING 00:26:58.130 #define SPDK_CONFIG_URING_PATH 00:26:58.130 #undef SPDK_CONFIG_URING_ZNS 00:26:58.130 #undef SPDK_CONFIG_USDT 00:26:58.130 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:58.130 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:58.130 #undef SPDK_CONFIG_VFIO_USER 00:26:58.130 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:58.130 #define SPDK_CONFIG_VHOST 1 00:26:58.130 #define SPDK_CONFIG_VIRTIO 1 00:26:58.130 #undef SPDK_CONFIG_VTUNE 00:26:58.130 #define SPDK_CONFIG_VTUNE_DIR 00:26:58.130 #define SPDK_CONFIG_WERROR 1 00:26:58.130 #define SPDK_CONFIG_WPDK_DIR 00:26:58.130 #undef SPDK_CONFIG_XNVME 00:26:58.130 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:58.130 13:50:37 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:58.130 13:50:37 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:58.130 13:50:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.130 13:50:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.130 13:50:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.130 13:50:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:58.130 13:50:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:58.130 13:50:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:58.130 13:50:37 -- paths/export.sh@5 -- # export PATH 00:26:58.130 13:50:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:58.130 13:50:37 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:58.130 13:50:37 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:58.130 13:50:37 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:58.130 13:50:37 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:58.130 13:50:37 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:58.130 13:50:37 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:58.130 13:50:37 -- pm/common@16 -- # TEST_TAG=N/A 00:26:58.130 13:50:37 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:58.130 13:50:37 -- common/autotest_common.sh@52 -- # : 1 00:26:58.130 13:50:37 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:58.130 13:50:37 -- common/autotest_common.sh@56 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:58.130 13:50:37 -- common/autotest_common.sh@58 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:58.130 13:50:37 -- common/autotest_common.sh@60 -- # : 1 00:26:58.130 13:50:37 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:58.130 13:50:37 -- common/autotest_common.sh@62 -- # : 1 00:26:58.130 13:50:37 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:58.130 13:50:37 -- common/autotest_common.sh@64 -- # : 00:26:58.130 13:50:37 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:58.130 13:50:37 -- common/autotest_common.sh@66 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:58.130 13:50:37 -- common/autotest_common.sh@68 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:58.130 13:50:37 -- common/autotest_common.sh@70 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:58.130 13:50:37 -- common/autotest_common.sh@72 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:58.130 13:50:37 -- common/autotest_common.sh@74 -- # : 1 00:26:58.130 13:50:37 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:58.130 13:50:37 -- common/autotest_common.sh@76 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:58.130 13:50:37 -- common/autotest_common.sh@78 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:58.130 13:50:37 -- common/autotest_common.sh@80 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:58.130 13:50:37 -- common/autotest_common.sh@82 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:58.130 13:50:37 -- common/autotest_common.sh@84 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:58.130 13:50:37 -- 
common/autotest_common.sh@86 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:58.130 13:50:37 -- common/autotest_common.sh@88 -- # : 0 00:26:58.130 13:50:37 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:58.130 13:50:37 -- common/autotest_common.sh@90 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:58.131 13:50:37 -- common/autotest_common.sh@92 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:58.131 13:50:37 -- common/autotest_common.sh@94 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:58.131 13:50:37 -- common/autotest_common.sh@96 -- # : rdma 00:26:58.131 13:50:37 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:58.131 13:50:37 -- common/autotest_common.sh@98 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:58.131 13:50:37 -- common/autotest_common.sh@100 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:58.131 13:50:37 -- common/autotest_common.sh@102 -- # : 1 00:26:58.131 13:50:37 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:58.131 13:50:37 -- common/autotest_common.sh@104 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:58.131 13:50:37 -- common/autotest_common.sh@106 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:58.131 13:50:37 -- common/autotest_common.sh@108 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:58.131 13:50:37 -- common/autotest_common.sh@110 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:58.131 13:50:37 -- common/autotest_common.sh@112 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:58.131 13:50:37 -- common/autotest_common.sh@114 -- # : 1 00:26:58.131 13:50:37 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:58.131 13:50:37 -- common/autotest_common.sh@116 -- # : 1 00:26:58.131 13:50:37 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:58.131 13:50:37 -- common/autotest_common.sh@118 -- # : 00:26:58.131 13:50:37 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:58.131 13:50:37 -- common/autotest_common.sh@120 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:58.131 13:50:37 -- common/autotest_common.sh@122 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:58.131 13:50:37 -- common/autotest_common.sh@124 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:58.131 13:50:37 -- common/autotest_common.sh@126 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:58.131 13:50:37 -- common/autotest_common.sh@128 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:58.131 13:50:37 -- common/autotest_common.sh@130 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:58.131 13:50:37 -- common/autotest_common.sh@132 -- # : 00:26:58.131 13:50:37 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:58.131 
13:50:37 -- common/autotest_common.sh@134 -- # : true 00:26:58.131 13:50:37 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:58.131 13:50:37 -- common/autotest_common.sh@136 -- # : 1 00:26:58.131 13:50:37 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:58.131 13:50:37 -- common/autotest_common.sh@138 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:58.131 13:50:37 -- common/autotest_common.sh@140 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:58.131 13:50:37 -- common/autotest_common.sh@142 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:58.131 13:50:37 -- common/autotest_common.sh@144 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:58.131 13:50:37 -- common/autotest_common.sh@146 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:58.131 13:50:37 -- common/autotest_common.sh@148 -- # : 00:26:58.131 13:50:37 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:58.131 13:50:37 -- common/autotest_common.sh@150 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:58.131 13:50:37 -- common/autotest_common.sh@152 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:58.131 13:50:37 -- common/autotest_common.sh@154 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:58.131 13:50:37 -- common/autotest_common.sh@156 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:58.131 13:50:37 -- common/autotest_common.sh@158 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:58.131 13:50:37 -- common/autotest_common.sh@160 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:58.131 13:50:37 -- common/autotest_common.sh@163 -- # : 00:26:58.131 13:50:37 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:58.131 13:50:37 -- common/autotest_common.sh@165 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:58.131 13:50:37 -- common/autotest_common.sh@167 -- # : 0 00:26:58.131 13:50:37 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:58.131 13:50:37 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:58.131 13:50:37 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:58.131 13:50:37 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:58.131 13:50:37 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:58.131 13:50:37 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:58.131 13:50:37 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:58.131 13:50:37 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:58.131 13:50:37 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:58.131 13:50:37 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:58.132 13:50:37 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:58.132 13:50:37 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:58.132 13:50:37 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:58.132 13:50:37 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:58.132 13:50:37 -- common/autotest_common.sh@196 -- # cat 00:26:58.132 13:50:37 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:58.132 13:50:37 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:58.132 13:50:37 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:58.132 13:50:37 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:58.132 13:50:37 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:58.132 13:50:37 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:58.132 13:50:37 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:58.132 13:50:37 -- common/autotest_common.sh@235 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:58.132 13:50:37 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:58.132 13:50:37 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:58.132 13:50:37 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:58.132 13:50:37 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:58.132 13:50:37 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:58.132 13:50:37 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:58.132 13:50:37 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:58.132 13:50:37 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:58.132 13:50:37 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:58.132 13:50:37 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:58.132 13:50:37 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:58.132 13:50:37 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:58.132 13:50:37 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:58.132 13:50:37 -- common/autotest_common.sh@249 -- # valgrind= 00:26:58.132 13:50:37 -- common/autotest_common.sh@255 -- # uname -s 00:26:58.132 13:50:37 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:58.132 13:50:37 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:58.132 13:50:37 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:58.132 13:50:37 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:58.132 13:50:37 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:58.132 13:50:37 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:58.132 13:50:37 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:58.132 13:50:37 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:58.132 13:50:37 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:58.132 13:50:37 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:58.132 13:50:37 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:58.132 13:50:37 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:58.132 13:50:37 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:58.132 13:50:37 -- common/autotest_common.sh@309 -- # [[ -z 136982 ]] 00:26:58.132 13:50:37 -- common/autotest_common.sh@309 -- # kill -0 136982 00:26:58.132 13:50:37 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:58.132 13:50:37 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:58.132 13:50:37 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:58.132 13:50:37 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:58.132 13:50:37 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:58.132 13:50:37 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:58.132 13:50:37 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:58.132 13:50:37 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:58.132 13:50:37 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.srE94G 00:26:58.132 13:50:37 -- 
common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:26:58.132 13:50:37 -- common/autotest_common.sh@336 -- # [[ -n '' ]]
00:26:58.132 13:50:37 -- common/autotest_common.sh@341 -- # [[ -n '' ]]
00:26:58.132 13:50:37 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.srE94G/tests/interrupt /tmp/spdk.srE94G
00:26:58.132 13:50:37 -- common/autotest_common.sh@349 -- # requested_size=2214592512
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@318 -- # df -T
00:26:58.132 13:50:37 -- common/autotest_common.sh@318 -- # grep -v Filesystem
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224457728
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224457728
00:26:58.132 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=0
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688
00:26:58.132 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=10613739520
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112
00:26:58.132 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=9986277376
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269964288
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056
00:26:58.132 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880
00:26:58.132 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=0
00:26:58.132 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272557056
00:26:58.132 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272557056
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=94329274368
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=5373505536
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4
00:26:58.133 13:50:37 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # avails["$mount"]=0
00:26:58.133 13:50:37 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864
00:26:58.133 13:50:37 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864
00:26:58.133 13:50:37 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
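The assignments above are set_test_storage folding "df -T" output into per-mount tables keyed by mount point. A minimal sketch of that parse loop, reconstructed from this xtrace (the array and field names match the trace; everything else is an assumption rather than the verbatim autotest_common.sh source):

    # Sketch: fold `df -T` output into per-mount associative arrays, as the trace above does.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source   # e.g. /dev/vda1
        fss["$mount"]=$fs          # e.g. ext4
        sizes["$mount"]=$size      # the traced values are byte-scale; the exact df unit flags are not visible here
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)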
00:26:58.133 13:50:37 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n'
00:26:58.133 * Looking for test storage...
00:26:58.133 13:50:37 -- common/autotest_common.sh@359 -- # local target_space new_size
00:26:58.133 13:50:37 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}"
00:26:58.133 13:50:37 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:26:58.133 13:50:37 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}'
00:26:58.133 13:50:37 -- common/autotest_common.sh@363 -- # mount=/
00:26:58.133 13:50:37 -- common/autotest_common.sh@365 -- # target_space=10613739520
00:26:58.133 13:50:37 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size ))
00:26:58.133 13:50:37 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size ))
00:26:58.133 13:50:37 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]]
00:26:58.133 13:50:37 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]]
00:26:58.133 13:50:37 -- common/autotest_common.sh@371 -- # [[ / == / ]]
00:26:58.133 13:50:37 -- common/autotest_common.sh@372 -- # new_size=12200869888
00:26:58.133 13:50:37 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 ))
00:26:58.133 13:50:37 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:26:58.133 13:50:37 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:26:58.133 13:50:37 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:26:58.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:26:58.133 13:50:37 -- common/autotest_common.sh@380 -- # return 0
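The run above settled on / (ext4): requested_size is the 2 GiB passed to set_test_storage plus a 64 MiB margin, and / is accepted because it has enough free space and would stay under 95% full afterwards. The decision restated as a sketch with this run's numbers (reconstructed from the xtrace, not the verbatim source; it assumes the arrays from the previous sketch):

    requested_size=2214592512    # 2147483648 (2 GiB) + 67108864 (64 MiB margin)
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # "/" in this run
        target_space=${avails[$mount]}                                   # 10613739520
        (( target_space == 0 || target_space < requested_size )) && continue
        new_size=$(( uses[$mount] + requested_size ))    # 9986277376 + 2214592512 = 12200869888
        (( new_size * 100 / sizes[$mount] > 95 )) && continue   # 12200869888 * 100 / 20616794112 = 59, so / passes
        export SPDK_TEST_STORAGE=$target_dir
        break
    done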
00:26:58.133 13:50:37 -- common/autotest_common.sh@1667 -- # set -o errtrace
00:26:58.133 13:50:37 -- common/autotest_common.sh@1668 -- # shopt -s extdebug
00:26:58.133 13:50:37 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:26:58.133 13:50:37 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:26:58.133 13:50:37 -- common/autotest_common.sh@1672 -- # true
00:26:58.133 13:50:37 -- common/autotest_common.sh@1674 -- # xtrace_fd
00:26:58.133 13:50:37 -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:26:58.133 13:50:37 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:26:58.133 13:50:37 -- common/autotest_common.sh@27 -- # exec
00:26:58.133 13:50:37 -- common/autotest_common.sh@29 -- # exec
00:26:58.133 13:50:37 -- common/autotest_common.sh@31 -- # xtrace_restore
00:26:58.133 13:50:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:26:58.133 13:50:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:26:58.133 13:50:37 -- common/autotest_common.sh@18 -- # set -x
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock
00:26:58.133 13:50:37 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:26:58.133 13:50:37 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:26:58.133 13:50:37 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:26:58.133 13:50:37 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.134 13:50:37 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:26:58.134 13:50:37 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=137022
00:26:58.134 13:50:37 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:26:58.134 13:50:37 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:26:58.134 13:50:37 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 137022 /var/tmp/spdk.sock
00:26:58.134 13:50:37 -- common/autotest_common.sh@819 -- # '[' -z 137022 ']'
00:26:58.134 13:50:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.134 13:50:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:58.134 13:50:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:58.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:58.134 13:50:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:58.134 13:50:37 -- common/autotest_common.sh@10 -- # set +x
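waitforlisten is now polling for the target's RPC socket. What start_intr_tgt amounts to, as a hedged sketch (the launch line, socket path, and retry count are taken from the trace; the retry loop is an assumption about waitforlisten's implementation, using rpc_get_methods as a cheap probe RPC):

    # Launch the interrupt_tgt example app on a 3-core mask and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
    intr_tgt_pid=$!
    max_retries=100    # matches the traced default
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        (( max_retries-- > 0 )) || { echo 'interrupt_tgt never listened on /var/tmp/spdk.sock' >&2; exit 1; }
        sleep 0.1
    done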
00:26:58.134 [2024-07-10 13:50:37.419965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:58.134 [2024-07-10 13:50:37.420543] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137022 ]
00:26:58.394 [2024-07-10 13:50:37.583104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:58.654 [2024-07-10 13:50:37.781171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58.654 [2024-07-10 13:50:37.781393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:58.654 [2024-07-10 13:50:37.781608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.913 [2024-07-10 13:50:38.070506] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:58.913 13:50:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:58.913 13:50:38 -- common/autotest_common.sh@852 -- # return 0
00:26:58.913 13:50:38 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:26:58.913 13:50:38 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:26:58.913 13:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:58.913 13:50:38 -- common/autotest_common.sh@10 -- # set +x
00:26:59.172 13:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:26:59.172 "name": "app_thread",
00:26:59.172 "id": 1,
00:26:59.172 "active_pollers": [],
00:26:59.172 "timed_pollers": [
00:26:59.172 {
00:26:59.172 "name": "rpc_subsystem_poll",
00:26:59.172 "id": 1,
00:26:59.172 "state": "waiting",
00:26:59.172 "run_count": 0,
00:26:59.172 "busy_count": 0,
00:26:59.172 "period_ticks": 9160000
00:26:59.172 }
00:26:59.172 ],
00:26:59.172 "paused_pollers": []
00:26:59.172 }'
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll
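native_pollers now records the pollers that exist before any AIO bdev is created; the test later diffs this against a second snapshot. The snapshot pattern, restated as a standalone sketch (the jq filters are copied from the trace; rpc_cmd in the trace is the suite's wrapper, replaced here with a direct rpc.py call):

    # Snapshot the app_thread's pollers over the RPC socket and flatten the names.
    app_thread=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    native_pollers+=' '
    native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    # In this run: no active pollers and one timed poller, so native_pollers=' rpc_subsystem_poll'.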
00:26:59.172 13:50:38 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:26:59.172 13:50:38 -- interrupt/interrupt_common.sh@98 -- # uname -s
00:26:59.172 13:50:38 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:26:59.172 13:50:38 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:26:59.172 5000+0 records in
00:26:59.172 5000+0 records out
00:26:59.172 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140055 s, 731 MB/s
00:26:59.172 13:50:38 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:26:59.431 AIO0
00:26:59.431 13:50:38 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:26:59.691 13:50:38 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
00:26:59.691 13:50:38 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:26:59.691 13:50:38 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:26:59.691 13:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:59.691 13:50:38 -- common/autotest_common.sh@10 -- # set +x
00:26:59.691 13:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:59.691 13:50:38 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:26:59.691 "name": "app_thread",
00:26:59.691 "id": 1,
00:26:59.691 "active_pollers": [],
00:26:59.691 "timed_pollers": [
00:26:59.691 {
00:26:59.691 "name": "rpc_subsystem_poll",
00:26:59.691 "id": 1,
00:26:59.691 "state": "waiting",
00:26:59.691 "run_count": 0,
00:26:59.691 "busy_count": 0,
00:26:59.691 "period_ticks": 9160000
00:26:59.691 }
00:26:59.691 ],
00:26:59.691 "paused_pollers": []
00:26:59.691 }'
00:26:59.691 13:50:38 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:26:59.691 13:50:39 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:26:59.691 13:50:39 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:26:59.691 13:50:39 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:26:59.951 13:50:39 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll
00:26:59.951 13:50:39 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]]
00:26:59.951 13:50:39 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:26:59.951 13:50:39 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 137022
00:26:59.951 13:50:39 -- common/autotest_common.sh@926 -- # '[' -z 137022 ']'
00:26:59.951 13:50:39 -- common/autotest_common.sh@930 -- # kill -0 137022
00:26:59.951 13:50:39 -- common/autotest_common.sh@931 -- # uname
00:26:59.951 13:50:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:59.951 13:50:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137022
00:26:59.951 13:50:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:26:59.951 13:50:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:26:59.951 killing process with pid 137022
00:26:59.951 13:50:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137022'
00:26:59.951 13:50:39 -- common/autotest_common.sh@945 -- # kill 137022
00:26:59.951 13:50:39 -- common/autotest_common.sh@950 -- # wait 137022
00:27:01.329 13:50:40 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:27:01.329 13:50:40 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:27:01.329 ************************************
00:27:01.329 END TEST reap_unregistered_poller
00:27:01.329 ************************************
00:27:01.329 
00:27:01.329 real 0m3.235s
00:27:01.329 user 0m2.723s
00:27:01.329 sys 0m0.519s
00:27:01.329 13:50:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:01.329 13:50:40 -- common/autotest_common.sh@10 -- # set +x
00:27:01.329 13:50:40 -- spdk/autotest.sh@204 -- # uname -s
00:27:01.329 13:50:40 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]]
00:27:01.329 13:50:40 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]]
00:27:01.329 13:50:40 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]]
00:27:01.329 13:50:40 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:27:01.329 13:50:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:27:01.329 13:50:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:01.329 13:50:40 -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.329 ************************************ 00:27:01.329 START TEST spdk_dd 00:27:01.329 ************************************ 00:27:01.329 13:50:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:27:01.329 * Looking for test storage... 00:27:01.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:01.329 13:50:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.329 13:50:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.329 13:50:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.329 13:50:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.329 13:50:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:01.329 13:50:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:01.329 13:50:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:01.329 13:50:40 -- paths/export.sh@5 -- # export PATH 00:27:01.329 13:50:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:01.329 13:50:40 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:01.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:01.847 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:02.784 13:50:41 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:27:02.784 13:50:41 -- dd/dd.sh@11 -- # nvme_in_userspace 00:27:02.784 13:50:41 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:02.784 13:50:41 -- scripts/common.sh@312 -- # local nvmes 00:27:02.784 13:50:41 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:02.784 13:50:41 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:02.784 13:50:41 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:02.784 13:50:41 -- scripts/common.sh@297 -- # local bdf= 00:27:02.784 13:50:41 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:02.784 13:50:41 -- scripts/common.sh@232 -- # local class 00:27:02.784 
13:50:41 -- scripts/common.sh@233 -- # local subclass
00:27:02.784 13:50:41 -- scripts/common.sh@234 -- # local progif
00:27:02.784 13:50:41 -- scripts/common.sh@235 -- # printf %02x 1
00:27:02.784 13:50:41 -- scripts/common.sh@235 -- # class=01
00:27:02.784 13:50:41 -- scripts/common.sh@236 -- # printf %02x 8
00:27:02.784 13:50:41 -- scripts/common.sh@236 -- # subclass=08
00:27:02.784 13:50:41 -- scripts/common.sh@237 -- # printf %02x 2
00:27:02.784 13:50:41 -- scripts/common.sh@237 -- # progif=02
00:27:02.784 13:50:41 -- scripts/common.sh@239 -- # hash lspci
00:27:02.784 13:50:41 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:27:02.784 13:50:41 -- scripts/common.sh@241 -- # lspci -mm -n -D
00:27:02.784 13:50:41 -- scripts/common.sh@242 -- # grep -i -- -p02
00:27:02.784 13:50:41 -- scripts/common.sh@244 -- # tr -d '"'
00:27:02.784 13:50:41 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:27:02.784 13:50:41 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:27:02.784 13:50:41 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0
00:27:02.784 13:50:41 -- scripts/common.sh@15 -- # local i
00:27:02.784 13:50:41 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:27:02.784 13:50:41 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:27:02.784 13:50:41 -- scripts/common.sh@24 -- # return 0
00:27:02.784 13:50:41 -- scripts/common.sh@301 -- # echo 0000:00:06.0
00:27:02.784 13:50:41 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:27:02.784 13:50:41 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]]
00:27:02.784 13:50:41 -- scripts/common.sh@322 -- # uname -s
00:27:02.784 13:50:41 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:27:02.784 13:50:41 -- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:27:02.784 13:50:41 -- scripts/common.sh@327 -- # (( 1 ))
00:27:02.784 13:50:41 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0
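That printf ends the PCI walk: nvme_in_userspace selects controllers whose class/subclass/progif is 01/08/02 (mass storage / NVM / NVMe). The same pipeline as a standalone sketch, lifted from the xtrace (assembling cc via one printf is an assumption about how the helper builds the pattern):

    cc=$(printf '%02x%02x' 1 8)    # class 01, subclass 08 -> "0108"; progif 02 is matched by the grep below
    lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk -v cc="$cc" -F ' ' '{if (cc ~ $2) print $1}'
    # prints 0000:00:06.0 here: the QEMU-emulated NVMe controller (vendor 1b36, device 0010)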
00:27:02.784 13:50:41 -- dd/dd.sh@13 -- # check_liburing
00:27:02.784 13:50:41 -- dd/common.sh@139 -- # local lib so
00:27:02.784 13:50:41 -- dd/common.sh@140 -- # local -g liburing_in_use=0
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1
00:27:02.784 13:50:41 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]]
00:27:02.784 13:50:41 -- dd/common.sh@142 -- # read -r lib _ so _
00:27:02.784 13:50:41 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
00:27:02.784 13:50:41 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:27:02.784 13:50:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:27:02.784 13:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:02.784 13:50:41 -- common/autotest_common.sh@10 -- # set +x
00:27:02.784 ************************************
00:27:02.784 START TEST spdk_dd_basic_rw
00:27:02.784 ************************************
00:27:02.784 13:50:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:27:02.784 * Looking for test storage...
00:27:02.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:02.784 13:50:42 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.784 13:50:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.784 13:50:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.784 13:50:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.784 13:50:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.784 13:50:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.784 13:50:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.784 13:50:42 -- paths/export.sh@5 -- # export PATH 00:27:02.784 13:50:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:02.784 13:50:42 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:27:02.784 13:50:42 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:27:02.784 13:50:42 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:27:02.784 13:50:42 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:27:02.784 13:50:42 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:27:02.784 13:50:42 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:27:02.784 13:50:42 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:27:02.784 13:50:42 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:02.784 13:50:42 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:02.784 13:50:42 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0
00:27:02.784 13:50:42 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id
00:27:02.784 13:50:42 -- dd/common.sh@126 -- # mapfile -t id
00:27:02.784 13:50:42 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0'
00:27:03.046 13:50:42 -- dd/common.sh@129 -- # [[ <spdk_nvme_identify report for the QEMU NVMe Ctrl [1b36:0010] at 0000:00:06.0 (Serial 12340, FW 8.0.0, NVMe 1.4, Namespace ID:1, 1310720 LBAs / 5GiB): 8 LBA formats, #00-#03 with Data Size 512 and Metadata Size 0/8/16/64, #04-#07 with Data Size 4096 and Metadata Size 0/8/16/64, Current LBA Format: LBA Format #04> =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:27:03.046 13:50:42 -- dd/common.sh@130 -- # lbaf=04
00:27:03.046 13:50:42 -- dd/common.sh@131 -- # [[ <the same spdk_nvme_identify report as above> =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:27:03.046 13:50:42 -- dd/common.sh@132 -- # lbaf=4096
00:27:03.046 13:50:42 -- dd/common.sh@134 -- # echo 4096
00:27:03.046 13:50:42 -- dd/basic_rw.sh@93 -- # native_bs=4096
00:27:03.046 13:50:42 -- dd/basic_rw.sh@96 -- # :
00:27:03.046 13:50:42 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:27:03.046 13:50:42 -- dd/basic_rw.sh@96 -- # gen_conf
00:27:03.046 13:50:42 -- dd/common.sh@31 -- # xtrace_disable
00:27:03.046 13:50:42 -- common/autotest_common.sh@10 -- # set +x
00:27:03.046 13:50:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:27:03.046 13:50:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:03.046 13:50:42 -- common/autotest_common.sh@10 -- # set +x
00:27:03.046 ************************************
00:27:03.046 START TEST dd_bs_lt_native_bs
00:27:03.046 ************************************ 00:27:03.046 13:50:42 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:03.046 13:50:42 -- common/autotest_common.sh@640 -- # local es=0 00:27:03.046 13:50:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:03.046 13:50:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.046 13:50:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:03.047 13:50:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.047 13:50:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:03.047 13:50:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.047 13:50:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:03.047 13:50:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.047 13:50:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.047 13:50:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:27:03.047 { 00:27:03.047 "subsystems": [ 00:27:03.047 { 00:27:03.047 "subsystem": "bdev", 00:27:03.047 "config": [ 00:27:03.047 { 00:27:03.047 "params": { 00:27:03.047 "trtype": "pcie", 00:27:03.047 "traddr": "0000:00:06.0", 00:27:03.047 "name": "Nvme0" 00:27:03.047 }, 00:27:03.047 "method": "bdev_nvme_attach_controller" 00:27:03.047 }, 00:27:03.047 { 00:27:03.047 "method": "bdev_wait_for_examine" 00:27:03.047 } 00:27:03.047 ] 00:27:03.047 } 00:27:03.047 ] 00:27:03.047 } 00:27:03.047 [2024-07-10 13:50:42.395259] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
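The get_native_nvme_bs probe traced above is what pins the native block size that the rest of this section tests against: it captures the spdk_nvme_identify report, finds which LBA format is current, then reads that format's data size. A condensed bash sketch of that logic, reconstructed from the xtrace (function, tool names, and regexes are taken from the log; the real dd/common.sh may differ in details such as quoting):

# Sketch: derive a controller's native block size the way the trace shows.
# Assumes spdk_nvme_identify is on PATH; the regexes are copied from the log.
get_native_nvme_bs() {
    local pci=$1 lbaf id re
    # Capture the identify report; ${id[*]} later flattens it to one string,
    # which is exactly how it appears inside the [[ ... =~ ... ]] trace above.
    mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # Which LBA format is currently selected? (#04 in this run)
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    # That format's data size is the native block size (4096 here).
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"
}

get_native_nvme_bs 0000:00:06.0   # prints 4096 for this QEMU controller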
00:27:03.047 [2024-07-10 13:50:42.395404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137344 ] 00:27:03.306 [2024-07-10 13:50:42.555104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.565 [2024-07-10 13:50:42.777832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.128 [2024-07-10 13:50:43.203803] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:27:04.128 [2024-07-10 13:50:43.203890] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:05.066 [2024-07-10 13:50:44.075478] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:05.323 ************************************ 00:27:05.323 END TEST dd_bs_lt_native_bs 00:27:05.323 ************************************ 00:27:05.323 13:50:44 -- common/autotest_common.sh@643 -- # es=234 00:27:05.323 13:50:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:05.323 13:50:44 -- common/autotest_common.sh@652 -- # es=106 00:27:05.323 13:50:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:05.323 13:50:44 -- common/autotest_common.sh@660 -- # es=1 00:27:05.323 13:50:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:05.323 00:27:05.323 real 0m2.189s 00:27:05.323 user 0m1.945s 00:27:05.323 sys 0m0.209s 00:27:05.323 13:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.323 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:05.323 13:50:44 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:27:05.323 13:50:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:05.323 13:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.323 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:05.323 ************************************ 00:27:05.323 START TEST dd_rw 00:27:05.323 ************************************ 00:27:05.323 13:50:44 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:27:05.323 13:50:44 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:27:05.323 13:50:44 -- dd/basic_rw.sh@12 -- # local count size 00:27:05.323 13:50:44 -- dd/basic_rw.sh@13 -- # local qds bss 00:27:05.323 13:50:44 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:27:05.323 13:50:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:05.323 13:50:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:05.323 13:50:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:05.323 13:50:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:05.323 13:50:44 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:27:05.323 13:50:44 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:27:05.323 13:50:44 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:05.323 13:50:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:05.323 13:50:44 -- dd/basic_rw.sh@23 -- # count=15 00:27:05.323 13:50:44 -- dd/basic_rw.sh@24 -- # count=15 00:27:05.323 13:50:44 -- dd/basic_rw.sh@25 -- # size=61440 00:27:05.323 13:50:44 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:05.323 13:50:44 -- dd/common.sh@98 -- # xtrace_disable 00:27:05.323 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:05.903 13:50:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
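dd_bs_lt_native_bs is a negative test: spdk_dd is expected to refuse --bs=2048 because it is below the 4096-byte native block size, and the NOT wrapper turns that expected failure into a pass. The es=234, es=106, es=1 sequence in the trace is the wrapper normalizing the exit status before inverting it. A sketch of that pattern, reconstructed from the autotest_common.sh trace lines (not SPDK's verbatim source; the valid_exec_arg check is elided):

# NOT: run a command that is supposed to fail; succeed only if it did.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 usually mean "killed by signal (es - 128)";
    # fold them down, as the trace shows for es=234 -> 106.
    ((es > 128)) && es=$((es - 128))
    # The real wrapper special-cases a few known codes here; collapse any
    # remaining failure to a plain 1, as the trace shows for 106 -> 1.
    ((es != 0)) && es=1
    # Invert: a nonzero es (the command failed) becomes success.
    ((!es == 0))
}

NOT false && echo "expected failure detected, test passes"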
00:27:05.903 13:50:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:05.903 13:50:44 -- dd/common.sh@31 -- # xtrace_disable 00:27:05.903 13:50:44 -- common/autotest_common.sh@10 -- # set +x 00:27:05.903 [2024-07-10 13:50:45.055542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:05.903 [2024-07-10 13:50:45.055661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137404 ] 00:27:05.903 { 00:27:05.903 "subsystems": [ 00:27:05.903 { 00:27:05.903 "subsystem": "bdev", 00:27:05.903 "config": [ 00:27:05.903 { 00:27:05.903 "params": { 00:27:05.903 "trtype": "pcie", 00:27:05.903 "traddr": "0000:00:06.0", 00:27:05.903 "name": "Nvme0" 00:27:05.903 }, 00:27:05.903 "method": "bdev_nvme_attach_controller" 00:27:05.903 }, 00:27:05.903 { 00:27:05.903 "method": "bdev_wait_for_examine" 00:27:05.903 } 00:27:05.903 ] 00:27:05.903 } 00:27:05.903 ] 00:27:05.903 } 00:27:05.903 [2024-07-10 13:50:45.203615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.162 [2024-07-10 13:50:45.409151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.107  Copying: 60/60 [kB] (average 19 MBps) 00:27:08.107 00:27:08.107 13:50:47 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:27:08.107 13:50:47 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:08.107 13:50:47 -- dd/common.sh@31 -- # xtrace_disable 00:27:08.107 13:50:47 -- common/autotest_common.sh@10 -- # set +x 00:27:08.107 [2024-07-10 13:50:47.085509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:08.107 [2024-07-10 13:50:47.085701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137438 ] 00:27:08.107 { 00:27:08.107 "subsystems": [ 00:27:08.107 { 00:27:08.107 "subsystem": "bdev", 00:27:08.107 "config": [ 00:27:08.107 { 00:27:08.107 "params": { 00:27:08.107 "trtype": "pcie", 00:27:08.107 "traddr": "0000:00:06.0", 00:27:08.107 "name": "Nvme0" 00:27:08.107 }, 00:27:08.107 "method": "bdev_nvme_attach_controller" 00:27:08.107 }, 00:27:08.107 { 00:27:08.107 "method": "bdev_wait_for_examine" 00:27:08.107 } 00:27:08.107 ] 00:27:08.107 } 00:27:08.107 ] 00:27:08.107 } 00:27:08.107 [2024-07-10 13:50:47.256851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.366 [2024-07-10 13:50:47.470943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.000  Copying: 60/60 [kB] (average 19 MBps) 00:27:10.000 00:27:10.000 13:50:49 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:10.000 13:50:49 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:10.000 13:50:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:10.000 13:50:49 -- dd/common.sh@11 -- # local nvme_ref= 00:27:10.000 13:50:49 -- dd/common.sh@12 -- # local size=61440 00:27:10.000 13:50:49 -- dd/common.sh@14 -- # local bs=1048576 00:27:10.000 13:50:49 -- dd/common.sh@15 -- # local count=1 00:27:10.000 13:50:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:10.000 13:50:49 -- dd/common.sh@18 -- # gen_conf 00:27:10.001 13:50:49 -- dd/common.sh@31 -- # xtrace_disable 00:27:10.001 13:50:49 -- common/autotest_common.sh@10 -- # set +x 00:27:10.001 [2024-07-10 13:50:49.329915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:10.001 [2024-07-10 13:50:49.330073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137484 ] 00:27:10.001 { 00:27:10.001 "subsystems": [ 00:27:10.001 { 00:27:10.001 "subsystem": "bdev", 00:27:10.001 "config": [ 00:27:10.001 { 00:27:10.001 "params": { 00:27:10.001 "trtype": "pcie", 00:27:10.001 "traddr": "0000:00:06.0", 00:27:10.001 "name": "Nvme0" 00:27:10.001 }, 00:27:10.001 "method": "bdev_nvme_attach_controller" 00:27:10.001 }, 00:27:10.001 { 00:27:10.001 "method": "bdev_wait_for_examine" 00:27:10.001 } 00:27:10.001 ] 00:27:10.001 } 00:27:10.001 ] 00:27:10.001 } 00:27:10.259 [2024-07-10 13:50:49.480551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.516 [2024-07-10 13:50:49.694229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.154  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:12.154 00:27:12.154 13:50:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:12.154 13:50:51 -- dd/basic_rw.sh@23 -- # count=15 00:27:12.154 13:50:51 -- dd/basic_rw.sh@24 -- # count=15 00:27:12.154 13:50:51 -- dd/basic_rw.sh@25 -- # size=61440 00:27:12.154 13:50:51 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:12.154 13:50:51 -- dd/common.sh@98 -- # xtrace_disable 00:27:12.154 13:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.724 13:50:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:27:12.724 13:50:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:12.724 13:50:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.724 13:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.724 [2024-07-10 13:50:51.839232] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:12.724 [2024-07-10 13:50:51.839373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137516 ] 00:27:12.724 { 00:27:12.724 "subsystems": [ 00:27:12.724 { 00:27:12.724 "subsystem": "bdev", 00:27:12.724 "config": [ 00:27:12.724 { 00:27:12.724 "params": { 00:27:12.724 "trtype": "pcie", 00:27:12.724 "traddr": "0000:00:06.0", 00:27:12.724 "name": "Nvme0" 00:27:12.724 }, 00:27:12.724 "method": "bdev_nvme_attach_controller" 00:27:12.724 }, 00:27:12.724 { 00:27:12.724 "method": "bdev_wait_for_examine" 00:27:12.724 } 00:27:12.724 ] 00:27:12.724 } 00:27:12.724 ] 00:27:12.724 } 00:27:12.724 [2024-07-10 13:50:51.989763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.983 [2024-07-10 13:50:52.203877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.932  Copying: 60/60 [kB] (average 58 MBps) 00:27:14.932 00:27:14.932 13:50:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:27:14.932 13:50:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:14.932 13:50:53 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.932 13:50:53 -- common/autotest_common.sh@10 -- # set +x 00:27:14.932 [2024-07-10 13:50:54.018703] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
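Every dd_rw round that follows repeats the four-step cycle visible above: write the generated pattern file to the bdev, read the same region back into a second file, byte-compare the two, then zero the head of the device so the next round starts clean. The loop sweeps bss=(4096 8192 16384) (the native block size shifted left by 0..2) against qds=(1 64). A sketch of one round, with paths, flags, and the JSON config taken from the trace (wiring gen_conf in via process substitution is a simplification; the harness feeds it through explicit /dev/fd descriptors):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Stand-in for the harness's gen_conf: the bdev config the log shows being
# handed to spdk_dd on an auxiliary fd.
gen_conf() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

verify_round() {
    local bs=$1 qd=$2 count=$3
    # 1. Write the random test file to the bdev at this block size / queue depth.
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf) || return 1
    # 2. Read the same number of blocks back into a second file.
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf) || return 1
    # 3. Pass only if the round-trip is byte-identical.
    diff -q dd.dump0 dd.dump1 || return 1
    # 4. Zero the first MiB of the bdev so the next round starts from a clean slate.
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)
}

verify_round 4096 1 15   # the first (bs, qd) pair in the trace: 15 blocks = 60 kB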
00:27:14.932 [2024-07-10 13:50:54.018831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137549 ] 00:27:14.932 { 00:27:14.932 "subsystems": [ 00:27:14.932 { 00:27:14.932 "subsystem": "bdev", 00:27:14.932 "config": [ 00:27:14.932 { 00:27:14.932 "params": { 00:27:14.932 "trtype": "pcie", 00:27:14.932 "traddr": "0000:00:06.0", 00:27:14.932 "name": "Nvme0" 00:27:14.932 }, 00:27:14.932 "method": "bdev_nvme_attach_controller" 00:27:14.932 }, 00:27:14.932 { 00:27:14.932 "method": "bdev_wait_for_examine" 00:27:14.932 } 00:27:14.933 ] 00:27:14.933 } 00:27:14.933 ] 00:27:14.933 } 00:27:14.933 [2024-07-10 13:50:54.167998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.192 [2024-07-10 13:50:54.379691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.701  Copying: 60/60 [kB] (average 58 MBps) 00:27:16.701 00:27:16.701 13:50:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:16.701 13:50:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:16.701 13:50:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:16.701 13:50:56 -- dd/common.sh@11 -- # local nvme_ref= 00:27:16.701 13:50:56 -- dd/common.sh@12 -- # local size=61440 00:27:16.701 13:50:56 -- dd/common.sh@14 -- # local bs=1048576 00:27:16.701 13:50:56 -- dd/common.sh@15 -- # local count=1 00:27:16.701 13:50:56 -- dd/common.sh@18 -- # gen_conf 00:27:16.701 13:50:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:16.701 13:50:56 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.701 13:50:56 -- common/autotest_common.sh@10 -- # set +x 00:27:16.961 [2024-07-10 13:50:56.112069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:16.961 [2024-07-10 13:50:56.112231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137587 ] 00:27:16.961 { 00:27:16.961 "subsystems": [ 00:27:16.961 { 00:27:16.961 "subsystem": "bdev", 00:27:16.961 "config": [ 00:27:16.961 { 00:27:16.961 "params": { 00:27:16.961 "trtype": "pcie", 00:27:16.961 "traddr": "0000:00:06.0", 00:27:16.961 "name": "Nvme0" 00:27:16.961 }, 00:27:16.961 "method": "bdev_nvme_attach_controller" 00:27:16.961 }, 00:27:16.961 { 00:27:16.961 "method": "bdev_wait_for_examine" 00:27:16.961 } 00:27:16.961 ] 00:27:16.961 } 00:27:16.961 ] 00:27:16.961 } 00:27:16.961 [2024-07-10 13:50:56.271024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.221 [2024-07-10 13:50:56.484044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.169  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:19.169 00:27:19.169 13:50:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:19.169 13:50:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:19.169 13:50:58 -- dd/basic_rw.sh@23 -- # count=7 00:27:19.169 13:50:58 -- dd/basic_rw.sh@24 -- # count=7 00:27:19.169 13:50:58 -- dd/basic_rw.sh@25 -- # size=57344 00:27:19.169 13:50:58 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:19.170 13:50:58 -- dd/common.sh@98 -- # xtrace_disable 00:27:19.170 13:50:58 -- common/autotest_common.sh@10 -- # set +x 00:27:19.429 13:50:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:19.429 13:50:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:19.429 13:50:58 -- dd/common.sh@31 -- # xtrace_disable 00:27:19.429 13:50:58 -- common/autotest_common.sh@10 -- # set +x 00:27:19.429 [2024-07-10 13:50:58.721431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:19.429 [2024-07-10 13:50:58.721563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137619 ] 00:27:19.429 { 00:27:19.429 "subsystems": [ 00:27:19.429 { 00:27:19.429 "subsystem": "bdev", 00:27:19.429 "config": [ 00:27:19.429 { 00:27:19.429 "params": { 00:27:19.429 "trtype": "pcie", 00:27:19.429 "traddr": "0000:00:06.0", 00:27:19.429 "name": "Nvme0" 00:27:19.429 }, 00:27:19.429 "method": "bdev_nvme_attach_controller" 00:27:19.429 }, 00:27:19.429 { 00:27:19.429 "method": "bdev_wait_for_examine" 00:27:19.429 } 00:27:19.429 ] 00:27:19.429 } 00:27:19.429 ] 00:27:19.429 } 00:27:19.689 [2024-07-10 13:50:58.869486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.949 [2024-07-10 13:50:59.086802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.589  Copying: 56/56 [kB] (average 54 MBps) 00:27:21.589 00:27:21.589 13:51:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:21.589 13:51:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:21.589 13:51:00 -- dd/common.sh@31 -- # xtrace_disable 00:27:21.589 13:51:00 -- common/autotest_common.sh@10 -- # set +x 00:27:21.589 [2024-07-10 13:51:00.804490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:21.589 [2024-07-10 13:51:00.804648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137661 ] 00:27:21.589 { 00:27:21.589 "subsystems": [ 00:27:21.589 { 00:27:21.589 "subsystem": "bdev", 00:27:21.589 "config": [ 00:27:21.589 { 00:27:21.589 "params": { 00:27:21.590 "trtype": "pcie", 00:27:21.590 "traddr": "0000:00:06.0", 00:27:21.590 "name": "Nvme0" 00:27:21.590 }, 00:27:21.590 "method": "bdev_nvme_attach_controller" 00:27:21.590 }, 00:27:21.590 { 00:27:21.590 "method": "bdev_wait_for_examine" 00:27:21.590 } 00:27:21.590 ] 00:27:21.590 } 00:27:21.590 ] 00:27:21.590 } 00:27:21.849 [2024-07-10 13:51:00.959284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.849 [2024-07-10 13:51:01.179735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.803  Copying: 56/56 [kB] (average 27 MBps) 00:27:23.803 00:27:23.803 13:51:03 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:23.803 13:51:03 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:23.803 13:51:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:23.803 13:51:03 -- dd/common.sh@11 -- # local nvme_ref= 00:27:23.803 13:51:03 -- dd/common.sh@12 -- # local size=57344 00:27:23.803 13:51:03 -- dd/common.sh@14 -- # local bs=1048576 00:27:23.803 13:51:03 -- dd/common.sh@15 -- # local count=1 00:27:23.803 13:51:03 -- dd/common.sh@18 -- # gen_conf 00:27:23.803 13:51:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:23.803 13:51:03 -- dd/common.sh@31 -- # xtrace_disable 00:27:23.803 13:51:03 -- common/autotest_common.sh@10 -- # set +x 00:27:23.803 { 00:27:23.803 "subsystems": [ 00:27:23.803 { 00:27:23.803 
"subsystem": "bdev", 00:27:23.803 "config": [ 00:27:23.803 { 00:27:23.803 "params": { 00:27:23.803 "trtype": "pcie", 00:27:23.803 "traddr": "0000:00:06.0", 00:27:23.803 "name": "Nvme0" 00:27:23.803 }, 00:27:23.803 "method": "bdev_nvme_attach_controller" 00:27:23.803 }, 00:27:23.803 { 00:27:23.803 "method": "bdev_wait_for_examine" 00:27:23.803 } 00:27:23.803 ] 00:27:23.803 } 00:27:23.803 ] 00:27:23.803 } 00:27:23.803 [2024-07-10 13:51:03.087300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:23.803 [2024-07-10 13:51:03.087427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137694 ] 00:27:24.064 [2024-07-10 13:51:03.244358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.322 [2024-07-10 13:51:03.466743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.961  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:25.961 00:27:25.961 13:51:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:25.961 13:51:05 -- dd/basic_rw.sh@23 -- # count=7 00:27:25.961 13:51:05 -- dd/basic_rw.sh@24 -- # count=7 00:27:25.961 13:51:05 -- dd/basic_rw.sh@25 -- # size=57344 00:27:25.961 13:51:05 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:25.961 13:51:05 -- dd/common.sh@98 -- # xtrace_disable 00:27:25.961 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:27:26.529 13:51:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:26.529 13:51:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:26.529 13:51:05 -- dd/common.sh@31 -- # xtrace_disable 00:27:26.529 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:27:26.529 [2024-07-10 13:51:05.663001] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:26.529 [2024-07-10 13:51:05.663130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137733 ] 00:27:26.529 { 00:27:26.529 "subsystems": [ 00:27:26.529 { 00:27:26.529 "subsystem": "bdev", 00:27:26.529 "config": [ 00:27:26.529 { 00:27:26.529 "params": { 00:27:26.529 "trtype": "pcie", 00:27:26.529 "traddr": "0000:00:06.0", 00:27:26.529 "name": "Nvme0" 00:27:26.529 }, 00:27:26.529 "method": "bdev_nvme_attach_controller" 00:27:26.529 }, 00:27:26.529 { 00:27:26.529 "method": "bdev_wait_for_examine" 00:27:26.529 } 00:27:26.529 ] 00:27:26.529 } 00:27:26.529 ] 00:27:26.529 } 00:27:26.529 [2024-07-10 13:51:05.822357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.789 [2024-07-10 13:51:06.042736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.739  Copying: 56/56 [kB] (average 54 MBps) 00:27:28.739 00:27:28.739 13:51:07 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:28.739 13:51:07 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:28.739 13:51:07 -- dd/common.sh@31 -- # xtrace_disable 00:27:28.739 13:51:07 -- common/autotest_common.sh@10 -- # set +x 00:27:28.739 [2024-07-10 13:51:07.912679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:28.739 [2024-07-10 13:51:07.912820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137765 ] 00:27:28.739 { 00:27:28.739 "subsystems": [ 00:27:28.739 { 00:27:28.739 "subsystem": "bdev", 00:27:28.739 "config": [ 00:27:28.739 { 00:27:28.739 "params": { 00:27:28.739 "trtype": "pcie", 00:27:28.739 "traddr": "0000:00:06.0", 00:27:28.739 "name": "Nvme0" 00:27:28.739 }, 00:27:28.739 "method": "bdev_nvme_attach_controller" 00:27:28.739 }, 00:27:28.739 { 00:27:28.739 "method": "bdev_wait_for_examine" 00:27:28.739 } 00:27:28.739 ] 00:27:28.739 } 00:27:28.739 ] 00:27:28.739 } 00:27:28.739 [2024-07-10 13:51:08.071266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.996 [2024-07-10 13:51:08.294745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.935  Copying: 56/56 [kB] (average 54 MBps) 00:27:30.935 00:27:30.935 13:51:09 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:30.935 13:51:09 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:30.935 13:51:09 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:30.935 13:51:09 -- dd/common.sh@11 -- # local nvme_ref= 00:27:30.935 13:51:09 -- dd/common.sh@12 -- # local size=57344 00:27:30.935 13:51:09 -- dd/common.sh@14 -- # local bs=1048576 00:27:30.935 13:51:09 -- dd/common.sh@15 -- # local count=1 00:27:30.935 13:51:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:30.935 13:51:09 -- dd/common.sh@18 -- # gen_conf 00:27:30.935 13:51:09 -- dd/common.sh@31 -- # xtrace_disable 00:27:30.935 13:51:09 -- common/autotest_common.sh@10 -- # set +x 00:27:30.935 [2024-07-10 13:51:10.031251] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:30.935 [2024-07-10 13:51:10.031453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137822 ] 00:27:30.935 { 00:27:30.935 "subsystems": [ 00:27:30.935 { 00:27:30.935 "subsystem": "bdev", 00:27:30.935 "config": [ 00:27:30.935 { 00:27:30.935 "params": { 00:27:30.935 "trtype": "pcie", 00:27:30.935 "traddr": "0000:00:06.0", 00:27:30.935 "name": "Nvme0" 00:27:30.935 }, 00:27:30.935 "method": "bdev_nvme_attach_controller" 00:27:30.935 }, 00:27:30.935 { 00:27:30.935 "method": "bdev_wait_for_examine" 00:27:30.935 } 00:27:30.935 ] 00:27:30.935 } 00:27:30.935 ] 00:27:30.935 } 00:27:30.935 [2024-07-10 13:51:10.202275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.195 [2024-07-10 13:51:10.429796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.143  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:33.143 00:27:33.143 13:51:12 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:33.143 13:51:12 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:33.143 13:51:12 -- dd/basic_rw.sh@23 -- # count=3 00:27:33.143 13:51:12 -- dd/basic_rw.sh@24 -- # count=3 00:27:33.143 13:51:12 -- dd/basic_rw.sh@25 -- # size=49152 00:27:33.143 13:51:12 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:33.143 13:51:12 -- dd/common.sh@98 -- # xtrace_disable 00:27:33.143 13:51:12 -- common/autotest_common.sh@10 -- # set +x 00:27:33.402 13:51:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:33.402 13:51:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:33.402 13:51:12 -- dd/common.sh@31 -- # xtrace_disable 00:27:33.402 13:51:12 -- common/autotest_common.sh@10 -- # set +x 00:27:33.402 [2024-07-10 13:51:12.672050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:33.402 [2024-07-10 13:51:12.672204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137861 ] 00:27:33.402 { 00:27:33.402 "subsystems": [ 00:27:33.402 { 00:27:33.402 "subsystem": "bdev", 00:27:33.402 "config": [ 00:27:33.402 { 00:27:33.402 "params": { 00:27:33.402 "trtype": "pcie", 00:27:33.402 "traddr": "0000:00:06.0", 00:27:33.402 "name": "Nvme0" 00:27:33.402 }, 00:27:33.402 "method": "bdev_nvme_attach_controller" 00:27:33.402 }, 00:27:33.402 { 00:27:33.402 "method": "bdev_wait_for_examine" 00:27:33.402 } 00:27:33.402 ] 00:27:33.402 } 00:27:33.402 ] 00:27:33.402 } 00:27:33.661 [2024-07-10 13:51:12.832962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.918 [2024-07-10 13:51:13.053697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.557  Copying: 48/48 [kB] (average 46 MBps) 00:27:35.557 00:27:35.557 13:51:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:35.557 13:51:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:35.557 13:51:14 -- dd/common.sh@31 -- # xtrace_disable 00:27:35.557 13:51:14 -- common/autotest_common.sh@10 -- # set +x 00:27:35.557 [2024-07-10 13:51:14.790020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:35.557 [2024-07-10 13:51:14.790150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137893 ] 00:27:35.557 { 00:27:35.557 "subsystems": [ 00:27:35.557 { 00:27:35.557 "subsystem": "bdev", 00:27:35.557 "config": [ 00:27:35.557 { 00:27:35.557 "params": { 00:27:35.557 "trtype": "pcie", 00:27:35.557 "traddr": "0000:00:06.0", 00:27:35.557 "name": "Nvme0" 00:27:35.557 }, 00:27:35.557 "method": "bdev_nvme_attach_controller" 00:27:35.557 }, 00:27:35.557 { 00:27:35.557 "method": "bdev_wait_for_examine" 00:27:35.557 } 00:27:35.557 ] 00:27:35.557 } 00:27:35.557 ] 00:27:35.557 } 00:27:35.815 [2024-07-10 13:51:14.943996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.815 [2024-07-10 13:51:15.166160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.758  Copying: 48/48 [kB] (average 46 MBps) 00:27:37.758 00:27:37.758 13:51:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:37.758 13:51:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:37.758 13:51:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:37.758 13:51:17 -- dd/common.sh@11 -- # local nvme_ref= 00:27:37.758 13:51:17 -- dd/common.sh@12 -- # local size=49152 00:27:37.758 13:51:17 -- dd/common.sh@14 -- # local bs=1048576 00:27:37.759 13:51:17 -- dd/common.sh@15 -- # local count=1 00:27:37.759 13:51:17 -- dd/common.sh@18 -- # gen_conf 00:27:37.759 13:51:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:37.759 13:51:17 -- dd/common.sh@31 -- # xtrace_disable 00:27:37.759 13:51:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.759 [2024-07-10 13:51:17.057786] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:37.759 [2024-07-10 13:51:17.057963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137926 ] 00:27:37.759 { 00:27:37.759 "subsystems": [ 00:27:37.759 { 00:27:37.759 "subsystem": "bdev", 00:27:37.759 "config": [ 00:27:37.759 { 00:27:37.759 "params": { 00:27:37.759 "trtype": "pcie", 00:27:37.759 "traddr": "0000:00:06.0", 00:27:37.759 "name": "Nvme0" 00:27:37.759 }, 00:27:37.759 "method": "bdev_nvme_attach_controller" 00:27:37.759 }, 00:27:37.759 { 00:27:37.759 "method": "bdev_wait_for_examine" 00:27:37.759 } 00:27:37.759 ] 00:27:37.759 } 00:27:37.759 ] 00:27:37.759 } 00:27:38.048 [2024-07-10 13:51:17.215249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.307 [2024-07-10 13:51:17.428678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.943  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:39.943 00:27:39.943 13:51:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:39.943 13:51:19 -- dd/basic_rw.sh@23 -- # count=3 00:27:39.943 13:51:19 -- dd/basic_rw.sh@24 -- # count=3 00:27:39.943 13:51:19 -- dd/basic_rw.sh@25 -- # size=49152 00:27:39.943 13:51:19 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:39.943 13:51:19 -- dd/common.sh@98 -- # xtrace_disable 00:27:39.943 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:40.203 13:51:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:40.203 13:51:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:40.203 13:51:19 -- dd/common.sh@31 -- # xtrace_disable 00:27:40.203 13:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:40.462 [2024-07-10 13:51:19.596917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
4b94202c6 / DPDK 23.11.0 initialization...
00:27:40.462 [2024-07-10 13:51:19.597414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137978 ] 00:27:40.462 { 00:27:40.462 "subsystems": [ 00:27:40.462 { 00:27:40.462 "subsystem": "bdev", 00:27:40.462 "config": [ 00:27:40.462 { 00:27:40.462 "params": { 00:27:40.462 "trtype": "pcie", 00:27:40.462 "traddr": "0000:00:06.0", 00:27:40.462 "name": "Nvme0" 00:27:40.462 }, 00:27:40.462 "method": "bdev_nvme_attach_controller" 00:27:40.463 }, 00:27:40.463 { 00:27:40.463 "method": "bdev_wait_for_examine" 00:27:40.463 } 00:27:40.463 ] 00:27:40.463 } 00:27:40.463 ] 00:27:40.463 } 00:27:40.463 [2024-07-10 13:51:19.758304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.722 [2024-07-10 13:51:19.983812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.671  Copying: 48/48 [kB] (average 46 MBps) 00:27:42.671 00:27:42.671 13:51:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:42.671 13:51:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:42.671 13:51:21 -- dd/common.sh@31 -- # xtrace_disable 00:27:42.671 13:51:21 -- common/autotest_common.sh@10 -- # set +x 00:27:42.671 [2024-07-10 13:51:21.911940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:42.671 [2024-07-10 13:51:21.912097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138010 ] 00:27:42.671 { 00:27:42.671 "subsystems": [ 00:27:42.671 { 00:27:42.671 "subsystem": "bdev", 00:27:42.671 "config": [ 00:27:42.671 { 00:27:42.671 "params": { 00:27:42.671 "trtype": "pcie", 00:27:42.671 "traddr": "0000:00:06.0", 00:27:42.671 "name": "Nvme0" 00:27:42.671 }, 00:27:42.671 "method": "bdev_nvme_attach_controller" 00:27:42.671 }, 00:27:42.671 { 00:27:42.671 "method": "bdev_wait_for_examine" 00:27:42.671 } 00:27:42.671 ] 00:27:42.671 } 00:27:42.671 ] 00:27:42.671 } 00:27:42.929 [2024-07-10 13:51:22.064919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.188 [2024-07-10 13:51:22.283321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.825  Copying: 48/48 [kB] (average 46 MBps) 00:27:44.825 00:27:44.825 13:51:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:44.825 13:51:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:44.825 13:51:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:44.825 13:51:24 -- dd/common.sh@11 -- # local nvme_ref= 00:27:44.825 13:51:24 -- dd/common.sh@12 -- # local size=49152 00:27:44.825 13:51:24 -- dd/common.sh@14 -- # local bs=1048576 00:27:44.825 13:51:24 -- dd/common.sh@15 -- # local count=1 00:27:44.825 13:51:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:44.825 13:51:24 -- dd/common.sh@18 -- # gen_conf 00:27:44.825 13:51:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:44.825 13:51:24 -- common/autotest_common.sh@10 -- # set +x 00:27:44.825 [2024-07-10 13:51:24.089500] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:44.825 [2024-07-10 13:51:24.089620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138042 ] 00:27:44.825 { 00:27:44.825 "subsystems": [ 00:27:44.825 { 00:27:44.825 "subsystem": "bdev", 00:27:44.825 "config": [ 00:27:44.825 { 00:27:44.825 "params": { 00:27:44.825 "trtype": "pcie", 00:27:44.825 "traddr": "0000:00:06.0", 00:27:44.825 "name": "Nvme0" 00:27:44.825 }, 00:27:44.825 "method": "bdev_nvme_attach_controller" 00:27:44.825 }, 00:27:44.825 { 00:27:44.825 "method": "bdev_wait_for_examine" 00:27:44.825 } 00:27:44.825 ] 00:27:44.825 } 00:27:44.825 ] 00:27:44.825 } 00:27:45.084 [2024-07-10 13:51:24.239711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.343 [2024-07-10 13:51:24.464550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.979  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:46.979 00:27:47.238 ************************************ 00:27:47.238 END TEST dd_rw 00:27:47.238 ************************************ 00:27:47.238 00:27:47.238 real 0m41.779s 00:27:47.238 user 0m36.780s 00:27:47.238 sys 0m3.865s 00:27:47.238 13:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.238 13:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.238 13:51:26 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:47.238 13:51:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.238 13:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.238 13:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.238 ************************************ 00:27:47.238 START TEST dd_rw_offset 00:27:47.238 ************************************ 00:27:47.238 13:51:26 -- common/autotest_common.sh@1104 -- # basic_offset 00:27:47.238 13:51:26 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:27:47.238 13:51:26 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:47.238 13:51:26 -- dd/common.sh@98 -- # xtrace_disable 00:27:47.238 13:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.238 13:51:26 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:47.239 13:51:26 -- dd/basic_rw.sh@56 -- # 
data=r3h5r0ebqi3h8nws356qr9sljoazpz80cgxufc0o2mk0gsgq08tp2c1mv9hwktkdg8jo93g0z042l4k8dzjzf67tg23rh06hf5a2b91bhfyee3x99sevghzgsmhmbwcj2vb8zmho0by1mofc9bdav9r48qbea0v2dc2wh1wl1w9kacb0rua0dpm7i8r99bm1el89pk54vgvil9p1habtdnh3j6fe88eglk8uo99bqgcsuedk6ewe8as03oa2m5enk3g9tlbfemuireog8cs1556ntvi3bxu2tfx563mtbaxc9gzp5rwvpykj6evjg81tjyz43jhf4lujd16g864z52oqcbrwnyzod8e5zwrzkdz85krrqgn572kh5h22ze8wwpiajp0f4iogqtnqv6bek0l4xa1owcmm8i6z283fmzgcq6gtxt9z2p53jprclhh3n2r5oqkranwyso3lixjkwyu8vbkpl71sopxn0b9ltvgqd4p9y61lguowbfl6tsvq54pg2yh1yfehguycz8xe6y2xtdbtnfn0z19hnsvco4urgzo4wnpdu5g9xrz6a6gs4osuewantbbjxrt4b83dmxiqml4c9u8o5rl0ilsx61lnnfqvtmne6n4bqqijgqta3susr7u01rx5a9jdc7ff792j9hwtqzd6xfkn9vkkdqv5bgq9kqo9kwwdwq02hksf60npe00o6tjdbvuz0jesmpwm41c1c025s4kf2elg3yq35dvit1tututy40vvrbg1il30t8ljkf1pmk7jwgpclundxb6bovwf5y7z2eh4gy6epxonr6ldl276tap0t4in3qg08it7hkrl5gvqfl06hjo1kjhkmha120okzx5xkchd90cz6ajrj2pl0h390gl781vkr0o5wjjvx9jq7t4yrgv20n20rikyfcy7kt2n1225702j14wj9hh29sjujd6irx04ffr2a214ayi9fhite0rqrjyydkksieh4b51xz0gtfgretpvysniquzfskcruvzy2dvrjiudt8l2a4nejlckbn51sl9l2ux793cqn48f7ej6yiwof7i8q4n39tytkmcfh5f4pshd8x5bgpo84nl5hhc5o5r2upfy8aislcmn26v1osszjtg3xotda7oxxpcf80qsc3io3wjbe96t3nrxpn8iufg926rp3ljj6ykqhbhtxapfzmy16tm3sjbcr4v2yn8ip0xlb1nhixrj7ve0u9o3ezc1wib248xnww049keh67bb3s02aosb79j2wbmwucft039qf9428qyhb8t68wqalfw5tcwqycsno1azlpzwj013k01wg5bancyzhgzcgui9aygglkntcq5xxa5p1r500u9iz5t3ngln9torehdjnfdt392gqzgtlsk7gkd9tgsk7lnq9p4g9jnkphcf8aiupi5p5houa8mc0b57w13ts0am3t8xkv814usezhiq0qan4eodgvoph2ou8xwd8ceu4mxfy8onszmh3eo62oty2vmi76qcmrudwjr2v5wl2dnogyhohxqk3j5vfvfbxilx3dx5hbrupdgpc340g8xmli8opbl4dd1g1yhiywjetq5rtu7bvt1uqwqqdbeqrz3y6o3ccga54db35kmgpiwffxi7xt6aeg9171dtha3n1v7tmvnp7vuzs3dbp3c9we9rdxk3omwdh8zzw2pw8f4ns5o6zeatqpkv3wultywmus8o045moobznv8rcqdvbkrlxt9j6fpbd5e1pbac4xxkbatnjitsg2tw8j9c8imcowtf00tvhakthcuydpvodcl6agrk5gfgu1preobw8cmse8y2vd65wklht40f7zccjsyc5zuy7s0ywx6pzdas0krloi5gvcc8wnvj2fp8xqldwdd0lm7pubw83xqto3hkpcdkds38or8ibkbkzl3hmet8bkmkhpgctho426g26jsheqyuzq4ubpyvvwrq75luvh43ez33zwt8dlvyxdzdt0orm9qyjc984wtlx1efknal7zod4fzedc16jpgh80v2dqhjcgq94tl2b64sgkgezbvcvhfje011xqezrw4o6kvl3ipz5j6xpi64xumxkqov1yf6pqha822hstxnf3uf5yhf60nojvpez8dg7r44n3o7ce22mv9ouueeh4lobg56i1pjuczbw8wl8xzrxh35fznfqm8tqfo1k46kw23ah6fy7l10n1u4s9ntdqms4bpelu768xxvmtmch0kd21dchbvy5togk6og56vp0t8snlkj49po5bj840coa8632n5cu3k5qhxryaeslbucfpeud8g2mv8xo6mnmgnn6v2vpt55xa39jdpcvbven0g2p7jzls5dgp0yl0gydlgzp6j55kq4uqqy084r002y0jdykz6rpu9dr2tejmd8be6carc2scvncffalq0galve30lbvyrxg943n0dwwp6722ivdrcjieinaxxm5yfixieaech4cak2qlzm8uz2lbhrv0nlhz42wkxgyh5vjcn5xebzm6i24fadwu3uovxjk0rg5z6exrcyoiy0a5hh546q39u4fgf6l4kuclgu5k0izbcgk38wpbuxayo7q59rvk6xd509bnne35hgat4z3qdkt67cuqjlsm2i6ng5tlv9dfaq4njv7zwj58jd8f5l0am9mibp5wofq2sacm9xoo97kityvdhgdxsm8c1t34t9svw58chsjaf5qclhyon9wlcy35c8i49ognd46qugjuhs66y520aghy6d98la7h3hrh4yolyu0t1xe8q9fdojn8r9drvifr5jc7nc8zef8pavq4ukya6pcirmfcyvajl9sbn0uaks4pj57n8egae7r4g20czq8x4py90d1mq4gj6euy1im0ajz3ncnf5kggz02yy6h2gqxkss47lpxgc3or1o6cwjksliww6rq1iug1wzekcclj8tsmmhrnd6r4h19eh36u4kbrzg8elauhzki03yenthf84fwaqa01mud54vv4upxfs7m6e5fmosdsrv4n0oy1t0ur4gs2rqdafq5aiu3n5651o9468tsenrcwul9d8w7i3y6hb2ybbmltftt2wm050dc8a2cjefucjzfg1l06sjx5u806usump3ch60c7kv4xhlh9flr1yuv46cua5q8asqw88n24f7xc2vyu64pfwvx4gzfdy23tp7qbd25p8yi2lvx33zoh4as4y2uigu19w0ll9dy5k3iu95izoudhnmly0p8e5c05z9xbe95axj9ztk6ffsfphpdfo1jku31d9vnnn4zncptkbgei84ojkh030fgwaxes8668e4l9dcv8reh5cezgbm8k0mtb3k6u0f81yrgv6g0kgamtmtwx8j0jiu8a9ehj5tsfnabnphamqwwuu0yhg8qdm0tzjz3n5wsmbirwx814u0opwy297ws6n0jmspp60aon0f360zs8142d4iwxssbhk2bvxi4a1a7fh608bu72lxqi9k4rcj78ot4jc0in42ch4vwkwyvcsak7d
qnneta1k6pv2oth82ybcp2ji7u1e4zo3kpmbabxktd6qjouo2utic7prqwdfd61b1bj022n0ato0kdxulkimlnqo8i4j4oft87o61ytc7gb6dn8nv6oleusahvhnq7trj4pdxc5kv3axxc79njzxziio1wotd0aa4wdqo3aku5gtgj73xms6eim2k4a7s124fp3hodlwt63qvuuftem3wcslxbx3x4peztzejoitivrf6wsuthqslqg2cx0f1jdbzz1i6iwig8u8a353min9otavz6aw097fzb7sj1380e8s5amzp7bduf1krl40y490z7o5d2txm2wvzra2mzl65uassrxniw2bpwn1uuu0soeec9psfmow5f1488dbqa3s8b9ch5tjswdrufa7zolg4bro64ccly7vrsktox3xzed9u92q8rtebhifwr488j79w96qs61ayly909cprkngkrz5dv6orcda23wlk53gtg6z4958f6a1aduux8d585kyshfp6kwrxwvft2k210yohbmj3863y17ttr 00:27:47.239 13:51:26 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:47.239 13:51:26 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:47.239 13:51:26 -- dd/common.sh@31 -- # xtrace_disable 00:27:47.239 13:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.239 [2024-07-10 13:51:26.515061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:47.239 [2024-07-10 13:51:26.515182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138090 ] 00:27:47.239 { 00:27:47.239 "subsystems": [ 00:27:47.239 { 00:27:47.239 "subsystem": "bdev", 00:27:47.239 "config": [ 00:27:47.239 { 00:27:47.239 "params": { 00:27:47.239 "trtype": "pcie", 00:27:47.239 "traddr": "0000:00:06.0", 00:27:47.239 "name": "Nvme0" 00:27:47.239 }, 00:27:47.239 "method": "bdev_nvme_attach_controller" 00:27:47.239 }, 00:27:47.239 { 00:27:47.239 "method": "bdev_wait_for_examine" 00:27:47.239 } 00:27:47.239 ] 00:27:47.239 } 00:27:47.239 ] 00:27:47.239 } 00:27:47.498 [2024-07-10 13:51:26.677011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.758 [2024-07-10 13:51:26.900515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.398  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:49.398 00:27:49.398 13:51:28 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:49.398 13:51:28 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:49.398 13:51:28 -- dd/common.sh@31 -- # xtrace_disable 00:27:49.398 13:51:28 -- common/autotest_common.sh@10 -- # set +x 00:27:49.398 [2024-07-10 13:51:28.662633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
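[Annotation] The dd_rw_offset runs above hand spdk_dd its bdev configuration as JSON on a pipe (--json /dev/fd/62); the gen_conf output is the subsystems block printed in full in the log. A minimal stand-alone sketch of the same write (not the test's gen_conf helper), assuming the repository paths and the PCIe address 0000:00:06.0 from this run:

# Sketch only: write the same bdev config to a file and point --json at it,
# mirroring the /dev/fd/62 channel the test uses.
cat > /tmp/dd.conf <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 \
  --json /tmp/dd.conf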
00:27:49.398 [2024-07-10 13:51:28.662763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138126 ] 00:27:49.398 { 00:27:49.398 "subsystems": [ 00:27:49.398 { 00:27:49.398 "subsystem": "bdev", 00:27:49.398 "config": [ 00:27:49.398 { 00:27:49.398 "params": { 00:27:49.398 "trtype": "pcie", 00:27:49.398 "traddr": "0000:00:06.0", 00:27:49.398 "name": "Nvme0" 00:27:49.398 }, 00:27:49.398 "method": "bdev_nvme_attach_controller" 00:27:49.398 }, 00:27:49.398 { 00:27:49.398 "method": "bdev_wait_for_examine" 00:27:49.398 } 00:27:49.398 ] 00:27:49.398 } 00:27:49.398 ] 00:27:49.398 } 00:27:49.658 [2024-07-10 13:51:28.817956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.917 [2024-07-10 13:51:29.031389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.555  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:51.555 00:27:51.555 13:51:30 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:51.556 13:51:30 -- dd/basic_rw.sh@72 -- # [[ r3h5r0ebqi3h8nws356qr9sljoazpz80cgxufc0o2mk0gsgq08tp2c1mv9hwktkdg8jo93g0z042l4k8dzjzf67tg23rh06hf5a2b91bhfyee3x99sevghzgsmhmbwcj2vb8zmho0by1mofc9bdav9r48qbea0v2dc2wh1wl1w9kacb0rua0dpm7i8r99bm1el89pk54vgvil9p1habtdnh3j6fe88eglk8uo99bqgcsuedk6ewe8as03oa2m5enk3g9tlbfemuireog8cs1556ntvi3bxu2tfx563mtbaxc9gzp5rwvpykj6evjg81tjyz43jhf4lujd16g864z52oqcbrwnyzod8e5zwrzkdz85krrqgn572kh5h22ze8wwpiajp0f4iogqtnqv6bek0l4xa1owcmm8i6z283fmzgcq6gtxt9z2p53jprclhh3n2r5oqkranwyso3lixjkwyu8vbkpl71sopxn0b9ltvgqd4p9y61lguowbfl6tsvq54pg2yh1yfehguycz8xe6y2xtdbtnfn0z19hnsvco4urgzo4wnpdu5g9xrz6a6gs4osuewantbbjxrt4b83dmxiqml4c9u8o5rl0ilsx61lnnfqvtmne6n4bqqijgqta3susr7u01rx5a9jdc7ff792j9hwtqzd6xfkn9vkkdqv5bgq9kqo9kwwdwq02hksf60npe00o6tjdbvuz0jesmpwm41c1c025s4kf2elg3yq35dvit1tututy40vvrbg1il30t8ljkf1pmk7jwgpclundxb6bovwf5y7z2eh4gy6epxonr6ldl276tap0t4in3qg08it7hkrl5gvqfl06hjo1kjhkmha120okzx5xkchd90cz6ajrj2pl0h390gl781vkr0o5wjjvx9jq7t4yrgv20n20rikyfcy7kt2n1225702j14wj9hh29sjujd6irx04ffr2a214ayi9fhite0rqrjyydkksieh4b51xz0gtfgretpvysniquzfskcruvzy2dvrjiudt8l2a4nejlckbn51sl9l2ux793cqn48f7ej6yiwof7i8q4n39tytkmcfh5f4pshd8x5bgpo84nl5hhc5o5r2upfy8aislcmn26v1osszjtg3xotda7oxxpcf80qsc3io3wjbe96t3nrxpn8iufg926rp3ljj6ykqhbhtxapfzmy16tm3sjbcr4v2yn8ip0xlb1nhixrj7ve0u9o3ezc1wib248xnww049keh67bb3s02aosb79j2wbmwucft039qf9428qyhb8t68wqalfw5tcwqycsno1azlpzwj013k01wg5bancyzhgzcgui9aygglkntcq5xxa5p1r500u9iz5t3ngln9torehdjnfdt392gqzgtlsk7gkd9tgsk7lnq9p4g9jnkphcf8aiupi5p5houa8mc0b57w13ts0am3t8xkv814usezhiq0qan4eodgvoph2ou8xwd8ceu4mxfy8onszmh3eo62oty2vmi76qcmrudwjr2v5wl2dnogyhohxqk3j5vfvfbxilx3dx5hbrupdgpc340g8xmli8opbl4dd1g1yhiywjetq5rtu7bvt1uqwqqdbeqrz3y6o3ccga54db35kmgpiwffxi7xt6aeg9171dtha3n1v7tmvnp7vuzs3dbp3c9we9rdxk3omwdh8zzw2pw8f4ns5o6zeatqpkv3wultywmus8o045moobznv8rcqdvbkrlxt9j6fpbd5e1pbac4xxkbatnjitsg2tw8j9c8imcowtf00tvhakthcuydpvodcl6agrk5gfgu1preobw8cmse8y2vd65wklht40f7zccjsyc5zuy7s0ywx6pzdas0krloi5gvcc8wnvj2fp8xqldwdd0lm7pubw83xqto3hkpcdkds38or8ibkbkzl3hmet8bkmkhpgctho426g26jsheqyuzq4ubpyvvwrq75luvh43ez33zwt8dlvyxdzdt0orm9qyjc984wtlx1efknal7zod4fzedc16jpgh80v2dqhjcgq94tl2b64sgkgezbvcvhfje011xqezrw4o6kvl3ipz5j6xpi64xumxkqov1yf6pqha822hstxnf3uf5yhf60nojvpez8dg7r44n3o7ce22mv9ouueeh4lobg56i1pjuczbw8wl8xzrxh35fznfqm8tqfo1k46kw23ah6fy7l10n1u4s9ntdqms4bpelu768xxvmtmch0kd21dchbvy5togk6og56vp0t8snlkj49po5bj840coa8632n5cu3k5qhxryaeslbucfpeud8g2mv8xo6mnmgnn6v2vpt55xa39jdpcvbven0g2p7jzls5dgp0yl0gydl
gzp6j55kq4uqqy084r002y0jdykz6rpu9dr2tejmd8be6carc2scvncffalq0galve30lbvyrxg943n0dwwp6722ivdrcjieinaxxm5yfixieaech4cak2qlzm8uz2lbhrv0nlhz42wkxgyh5vjcn5xebzm6i24fadwu3uovxjk0rg5z6exrcyoiy0a5hh546q39u4fgf6l4kuclgu5k0izbcgk38wpbuxayo7q59rvk6xd509bnne35hgat4z3qdkt67cuqjlsm2i6ng5tlv9dfaq4njv7zwj58jd8f5l0am9mibp5wofq2sacm9xoo97kityvdhgdxsm8c1t34t9svw58chsjaf5qclhyon9wlcy35c8i49ognd46qugjuhs66y520aghy6d98la7h3hrh4yolyu0t1xe8q9fdojn8r9drvifr5jc7nc8zef8pavq4ukya6pcirmfcyvajl9sbn0uaks4pj57n8egae7r4g20czq8x4py90d1mq4gj6euy1im0ajz3ncnf5kggz02yy6h2gqxkss47lpxgc3or1o6cwjksliww6rq1iug1wzekcclj8tsmmhrnd6r4h19eh36u4kbrzg8elauhzki03yenthf84fwaqa01mud54vv4upxfs7m6e5fmosdsrv4n0oy1t0ur4gs2rqdafq5aiu3n5651o9468tsenrcwul9d8w7i3y6hb2ybbmltftt2wm050dc8a2cjefucjzfg1l06sjx5u806usump3ch60c7kv4xhlh9flr1yuv46cua5q8asqw88n24f7xc2vyu64pfwvx4gzfdy23tp7qbd25p8yi2lvx33zoh4as4y2uigu19w0ll9dy5k3iu95izoudhnmly0p8e5c05z9xbe95axj9ztk6ffsfphpdfo1jku31d9vnnn4zncptkbgei84ojkh030fgwaxes8668e4l9dcv8reh5cezgbm8k0mtb3k6u0f81yrgv6g0kgamtmtwx8j0jiu8a9ehj5tsfnabnphamqwwuu0yhg8qdm0tzjz3n5wsmbirwx814u0opwy297ws6n0jmspp60aon0f360zs8142d4iwxssbhk2bvxi4a1a7fh608bu72lxqi9k4rcj78ot4jc0in42ch4vwkwyvcsak7dqnneta1k6pv2oth82ybcp2ji7u1e4zo3kpmbabxktd6qjouo2utic7prqwdfd61b1bj022n0ato0kdxulkimlnqo8i4j4oft87o61ytc7gb6dn8nv6oleusahvhnq7trj4pdxc5kv3axxc79njzxziio1wotd0aa4wdqo3aku5gtgj73xms6eim2k4a7s124fp3hodlwt63qvuuftem3wcslxbx3x4peztzejoitivrf6wsuthqslqg2cx0f1jdbzz1i6iwig8u8a353min9otavz6aw097fzb7sj1380e8s5amzp7bduf1krl40y490z7o5d2txm2wvzra2mzl65uassrxniw2bpwn1uuu0soeec9psfmow5f1488dbqa3s8b9ch5tjswdrufa7zolg4bro64ccly7vrsktox3xzed9u92q8rtebhifwr488j79w96qs61ayly909cprkngkrz5dv6orcda23wlk53gtg6z4958f6a1aduux8d585kyshfp6kwrxwvft2k210yohbmj3863y17ttr == \r\3\h\5\r\0\e\b\q\i\3\h\8\n\w\s\3\5\6\q\r\9\s\l\j\o\a\z\p\z\8\0\c\g\x\u\f\c\0\o\2\m\k\0\g\s\g\q\0\8\t\p\2\c\1\m\v\9\h\w\k\t\k\d\g\8\j\o\9\3\g\0\z\0\4\2\l\4\k\8\d\z\j\z\f\6\7\t\g\2\3\r\h\0\6\h\f\5\a\2\b\9\1\b\h\f\y\e\e\3\x\9\9\s\e\v\g\h\z\g\s\m\h\m\b\w\c\j\2\v\b\8\z\m\h\o\0\b\y\1\m\o\f\c\9\b\d\a\v\9\r\4\8\q\b\e\a\0\v\2\d\c\2\w\h\1\w\l\1\w\9\k\a\c\b\0\r\u\a\0\d\p\m\7\i\8\r\9\9\b\m\1\e\l\8\9\p\k\5\4\v\g\v\i\l\9\p\1\h\a\b\t\d\n\h\3\j\6\f\e\8\8\e\g\l\k\8\u\o\9\9\b\q\g\c\s\u\e\d\k\6\e\w\e\8\a\s\0\3\o\a\2\m\5\e\n\k\3\g\9\t\l\b\f\e\m\u\i\r\e\o\g\8\c\s\1\5\5\6\n\t\v\i\3\b\x\u\2\t\f\x\5\6\3\m\t\b\a\x\c\9\g\z\p\5\r\w\v\p\y\k\j\6\e\v\j\g\8\1\t\j\y\z\4\3\j\h\f\4\l\u\j\d\1\6\g\8\6\4\z\5\2\o\q\c\b\r\w\n\y\z\o\d\8\e\5\z\w\r\z\k\d\z\8\5\k\r\r\q\g\n\5\7\2\k\h\5\h\2\2\z\e\8\w\w\p\i\a\j\p\0\f\4\i\o\g\q\t\n\q\v\6\b\e\k\0\l\4\x\a\1\o\w\c\m\m\8\i\6\z\2\8\3\f\m\z\g\c\q\6\g\t\x\t\9\z\2\p\5\3\j\p\r\c\l\h\h\3\n\2\r\5\o\q\k\r\a\n\w\y\s\o\3\l\i\x\j\k\w\y\u\8\v\b\k\p\l\7\1\s\o\p\x\n\0\b\9\l\t\v\g\q\d\4\p\9\y\6\1\l\g\u\o\w\b\f\l\6\t\s\v\q\5\4\p\g\2\y\h\1\y\f\e\h\g\u\y\c\z\8\x\e\6\y\2\x\t\d\b\t\n\f\n\0\z\1\9\h\n\s\v\c\o\4\u\r\g\z\o\4\w\n\p\d\u\5\g\9\x\r\z\6\a\6\g\s\4\o\s\u\e\w\a\n\t\b\b\j\x\r\t\4\b\8\3\d\m\x\i\q\m\l\4\c\9\u\8\o\5\r\l\0\i\l\s\x\6\1\l\n\n\f\q\v\t\m\n\e\6\n\4\b\q\q\i\j\g\q\t\a\3\s\u\s\r\7\u\0\1\r\x\5\a\9\j\d\c\7\f\f\7\9\2\j\9\h\w\t\q\z\d\6\x\f\k\n\9\v\k\k\d\q\v\5\b\g\q\9\k\q\o\9\k\w\w\d\w\q\0\2\h\k\s\f\6\0\n\p\e\0\0\o\6\t\j\d\b\v\u\z\0\j\e\s\m\p\w\m\4\1\c\1\c\0\2\5\s\4\k\f\2\e\l\g\3\y\q\3\5\d\v\i\t\1\t\u\t\u\t\y\4\0\v\v\r\b\g\1\i\l\3\0\t\8\l\j\k\f\1\p\m\k\7\j\w\g\p\c\l\u\n\d\x\b\6\b\o\v\w\f\5\y\7\z\2\e\h\4\g\y\6\e\p\x\o\n\r\6\l\d\l\2\7\6\t\a\p\0\t\4\i\n\3\q\g\0\8\i\t\7\h\k\r\l\5\g\v\q\f\l\0\6\h\j\o\1\k\j\h\k\m\h\a\1\2\0\o\k\z\x\5\x\k\c\h\d\9\0\c\z\6\a\j\r\j\2\p\l\0\h\3\9\0\g\l\7\8\1\v\k\r\0\o\5\w\j\j\v\x\9\j\q\7\t\4\y\r\g\v\2
\0\n\2\0\r\i\k\y\f\c\y\7\k\t\2\n\1\2\2\5\7\0\2\j\1\4\w\j\9\h\h\2\9\s\j\u\j\d\6\i\r\x\0\4\f\f\r\2\a\2\1\4\a\y\i\9\f\h\i\t\e\0\r\q\r\j\y\y\d\k\k\s\i\e\h\4\b\5\1\x\z\0\g\t\f\g\r\e\t\p\v\y\s\n\i\q\u\z\f\s\k\c\r\u\v\z\y\2\d\v\r\j\i\u\d\t\8\l\2\a\4\n\e\j\l\c\k\b\n\5\1\s\l\9\l\2\u\x\7\9\3\c\q\n\4\8\f\7\e\j\6\y\i\w\o\f\7\i\8\q\4\n\3\9\t\y\t\k\m\c\f\h\5\f\4\p\s\h\d\8\x\5\b\g\p\o\8\4\n\l\5\h\h\c\5\o\5\r\2\u\p\f\y\8\a\i\s\l\c\m\n\2\6\v\1\o\s\s\z\j\t\g\3\x\o\t\d\a\7\o\x\x\p\c\f\8\0\q\s\c\3\i\o\3\w\j\b\e\9\6\t\3\n\r\x\p\n\8\i\u\f\g\9\2\6\r\p\3\l\j\j\6\y\k\q\h\b\h\t\x\a\p\f\z\m\y\1\6\t\m\3\s\j\b\c\r\4\v\2\y\n\8\i\p\0\x\l\b\1\n\h\i\x\r\j\7\v\e\0\u\9\o\3\e\z\c\1\w\i\b\2\4\8\x\n\w\w\0\4\9\k\e\h\6\7\b\b\3\s\0\2\a\o\s\b\7\9\j\2\w\b\m\w\u\c\f\t\0\3\9\q\f\9\4\2\8\q\y\h\b\8\t\6\8\w\q\a\l\f\w\5\t\c\w\q\y\c\s\n\o\1\a\z\l\p\z\w\j\0\1\3\k\0\1\w\g\5\b\a\n\c\y\z\h\g\z\c\g\u\i\9\a\y\g\g\l\k\n\t\c\q\5\x\x\a\5\p\1\r\5\0\0\u\9\i\z\5\t\3\n\g\l\n\9\t\o\r\e\h\d\j\n\f\d\t\3\9\2\g\q\z\g\t\l\s\k\7\g\k\d\9\t\g\s\k\7\l\n\q\9\p\4\g\9\j\n\k\p\h\c\f\8\a\i\u\p\i\5\p\5\h\o\u\a\8\m\c\0\b\5\7\w\1\3\t\s\0\a\m\3\t\8\x\k\v\8\1\4\u\s\e\z\h\i\q\0\q\a\n\4\e\o\d\g\v\o\p\h\2\o\u\8\x\w\d\8\c\e\u\4\m\x\f\y\8\o\n\s\z\m\h\3\e\o\6\2\o\t\y\2\v\m\i\7\6\q\c\m\r\u\d\w\j\r\2\v\5\w\l\2\d\n\o\g\y\h\o\h\x\q\k\3\j\5\v\f\v\f\b\x\i\l\x\3\d\x\5\h\b\r\u\p\d\g\p\c\3\4\0\g\8\x\m\l\i\8\o\p\b\l\4\d\d\1\g\1\y\h\i\y\w\j\e\t\q\5\r\t\u\7\b\v\t\1\u\q\w\q\q\d\b\e\q\r\z\3\y\6\o\3\c\c\g\a\5\4\d\b\3\5\k\m\g\p\i\w\f\f\x\i\7\x\t\6\a\e\g\9\1\7\1\d\t\h\a\3\n\1\v\7\t\m\v\n\p\7\v\u\z\s\3\d\b\p\3\c\9\w\e\9\r\d\x\k\3\o\m\w\d\h\8\z\z\w\2\p\w\8\f\4\n\s\5\o\6\z\e\a\t\q\p\k\v\3\w\u\l\t\y\w\m\u\s\8\o\0\4\5\m\o\o\b\z\n\v\8\r\c\q\d\v\b\k\r\l\x\t\9\j\6\f\p\b\d\5\e\1\p\b\a\c\4\x\x\k\b\a\t\n\j\i\t\s\g\2\t\w\8\j\9\c\8\i\m\c\o\w\t\f\0\0\t\v\h\a\k\t\h\c\u\y\d\p\v\o\d\c\l\6\a\g\r\k\5\g\f\g\u\1\p\r\e\o\b\w\8\c\m\s\e\8\y\2\v\d\6\5\w\k\l\h\t\4\0\f\7\z\c\c\j\s\y\c\5\z\u\y\7\s\0\y\w\x\6\p\z\d\a\s\0\k\r\l\o\i\5\g\v\c\c\8\w\n\v\j\2\f\p\8\x\q\l\d\w\d\d\0\l\m\7\p\u\b\w\8\3\x\q\t\o\3\h\k\p\c\d\k\d\s\3\8\o\r\8\i\b\k\b\k\z\l\3\h\m\e\t\8\b\k\m\k\h\p\g\c\t\h\o\4\2\6\g\2\6\j\s\h\e\q\y\u\z\q\4\u\b\p\y\v\v\w\r\q\7\5\l\u\v\h\4\3\e\z\3\3\z\w\t\8\d\l\v\y\x\d\z\d\t\0\o\r\m\9\q\y\j\c\9\8\4\w\t\l\x\1\e\f\k\n\a\l\7\z\o\d\4\f\z\e\d\c\1\6\j\p\g\h\8\0\v\2\d\q\h\j\c\g\q\9\4\t\l\2\b\6\4\s\g\k\g\e\z\b\v\c\v\h\f\j\e\0\1\1\x\q\e\z\r\w\4\o\6\k\v\l\3\i\p\z\5\j\6\x\p\i\6\4\x\u\m\x\k\q\o\v\1\y\f\6\p\q\h\a\8\2\2\h\s\t\x\n\f\3\u\f\5\y\h\f\6\0\n\o\j\v\p\e\z\8\d\g\7\r\4\4\n\3\o\7\c\e\2\2\m\v\9\o\u\u\e\e\h\4\l\o\b\g\5\6\i\1\p\j\u\c\z\b\w\8\w\l\8\x\z\r\x\h\3\5\f\z\n\f\q\m\8\t\q\f\o\1\k\4\6\k\w\2\3\a\h\6\f\y\7\l\1\0\n\1\u\4\s\9\n\t\d\q\m\s\4\b\p\e\l\u\7\6\8\x\x\v\m\t\m\c\h\0\k\d\2\1\d\c\h\b\v\y\5\t\o\g\k\6\o\g\5\6\v\p\0\t\8\s\n\l\k\j\4\9\p\o\5\b\j\8\4\0\c\o\a\8\6\3\2\n\5\c\u\3\k\5\q\h\x\r\y\a\e\s\l\b\u\c\f\p\e\u\d\8\g\2\m\v\8\x\o\6\m\n\m\g\n\n\6\v\2\v\p\t\5\5\x\a\3\9\j\d\p\c\v\b\v\e\n\0\g\2\p\7\j\z\l\s\5\d\g\p\0\y\l\0\g\y\d\l\g\z\p\6\j\5\5\k\q\4\u\q\q\y\0\8\4\r\0\0\2\y\0\j\d\y\k\z\6\r\p\u\9\d\r\2\t\e\j\m\d\8\b\e\6\c\a\r\c\2\s\c\v\n\c\f\f\a\l\q\0\g\a\l\v\e\3\0\l\b\v\y\r\x\g\9\4\3\n\0\d\w\w\p\6\7\2\2\i\v\d\r\c\j\i\e\i\n\a\x\x\m\5\y\f\i\x\i\e\a\e\c\h\4\c\a\k\2\q\l\z\m\8\u\z\2\l\b\h\r\v\0\n\l\h\z\4\2\w\k\x\g\y\h\5\v\j\c\n\5\x\e\b\z\m\6\i\2\4\f\a\d\w\u\3\u\o\v\x\j\k\0\r\g\5\z\6\e\x\r\c\y\o\i\y\0\a\5\h\h\5\4\6\q\3\9\u\4\f\g\f\6\l\4\k\u\c\l\g\u\5\k\0\i\z\b\c\g\k\3\8\w\p\b\u\x\a\y\o\7\q\5\9\r\v\k\6\x\d\5\0\9\b\n\n\e\3\5\h\g\a\t\4\z\3\q\d\k\t\6\7\c\u\q\j\l\s\m\2\i\6\n\g\5\t\l\v\9\d\f\a\q\4\n\j\v\7\z\w\j\5\8\j\d\8\f\5\l\0\a\m\9\m\i\b\p\5\w\o\f\q\2\
s\a\c\m\9\x\o\o\9\7\k\i\t\y\v\d\h\g\d\x\s\m\8\c\1\t\3\4\t\9\s\v\w\5\8\c\h\s\j\a\f\5\q\c\l\h\y\o\n\9\w\l\c\y\3\5\c\8\i\4\9\o\g\n\d\4\6\q\u\g\j\u\h\s\6\6\y\5\2\0\a\g\h\y\6\d\9\8\l\a\7\h\3\h\r\h\4\y\o\l\y\u\0\t\1\x\e\8\q\9\f\d\o\j\n\8\r\9\d\r\v\i\f\r\5\j\c\7\n\c\8\z\e\f\8\p\a\v\q\4\u\k\y\a\6\p\c\i\r\m\f\c\y\v\a\j\l\9\s\b\n\0\u\a\k\s\4\p\j\5\7\n\8\e\g\a\e\7\r\4\g\2\0\c\z\q\8\x\4\p\y\9\0\d\1\m\q\4\g\j\6\e\u\y\1\i\m\0\a\j\z\3\n\c\n\f\5\k\g\g\z\0\2\y\y\6\h\2\g\q\x\k\s\s\4\7\l\p\x\g\c\3\o\r\1\o\6\c\w\j\k\s\l\i\w\w\6\r\q\1\i\u\g\1\w\z\e\k\c\c\l\j\8\t\s\m\m\h\r\n\d\6\r\4\h\1\9\e\h\3\6\u\4\k\b\r\z\g\8\e\l\a\u\h\z\k\i\0\3\y\e\n\t\h\f\8\4\f\w\a\q\a\0\1\m\u\d\5\4\v\v\4\u\p\x\f\s\7\m\6\e\5\f\m\o\s\d\s\r\v\4\n\0\o\y\1\t\0\u\r\4\g\s\2\r\q\d\a\f\q\5\a\i\u\3\n\5\6\5\1\o\9\4\6\8\t\s\e\n\r\c\w\u\l\9\d\8\w\7\i\3\y\6\h\b\2\y\b\b\m\l\t\f\t\t\2\w\m\0\5\0\d\c\8\a\2\c\j\e\f\u\c\j\z\f\g\1\l\0\6\s\j\x\5\u\8\0\6\u\s\u\m\p\3\c\h\6\0\c\7\k\v\4\x\h\l\h\9\f\l\r\1\y\u\v\4\6\c\u\a\5\q\8\a\s\q\w\8\8\n\2\4\f\7\x\c\2\v\y\u\6\4\p\f\w\v\x\4\g\z\f\d\y\2\3\t\p\7\q\b\d\2\5\p\8\y\i\2\l\v\x\3\3\z\o\h\4\a\s\4\y\2\u\i\g\u\1\9\w\0\l\l\9\d\y\5\k\3\i\u\9\5\i\z\o\u\d\h\n\m\l\y\0\p\8\e\5\c\0\5\z\9\x\b\e\9\5\a\x\j\9\z\t\k\6\f\f\s\f\p\h\p\d\f\o\1\j\k\u\3\1\d\9\v\n\n\n\4\z\n\c\p\t\k\b\g\e\i\8\4\o\j\k\h\0\3\0\f\g\w\a\x\e\s\8\6\6\8\e\4\l\9\d\c\v\8\r\e\h\5\c\e\z\g\b\m\8\k\0\m\t\b\3\k\6\u\0\f\8\1\y\r\g\v\6\g\0\k\g\a\m\t\m\t\w\x\8\j\0\j\i\u\8\a\9\e\h\j\5\t\s\f\n\a\b\n\p\h\a\m\q\w\w\u\u\0\y\h\g\8\q\d\m\0\t\z\j\z\3\n\5\w\s\m\b\i\r\w\x\8\1\4\u\0\o\p\w\y\2\9\7\w\s\6\n\0\j\m\s\p\p\6\0\a\o\n\0\f\3\6\0\z\s\8\1\4\2\d\4\i\w\x\s\s\b\h\k\2\b\v\x\i\4\a\1\a\7\f\h\6\0\8\b\u\7\2\l\x\q\i\9\k\4\r\c\j\7\8\o\t\4\j\c\0\i\n\4\2\c\h\4\v\w\k\w\y\v\c\s\a\k\7\d\q\n\n\e\t\a\1\k\6\p\v\2\o\t\h\8\2\y\b\c\p\2\j\i\7\u\1\e\4\z\o\3\k\p\m\b\a\b\x\k\t\d\6\q\j\o\u\o\2\u\t\i\c\7\p\r\q\w\d\f\d\6\1\b\1\b\j\0\2\2\n\0\a\t\o\0\k\d\x\u\l\k\i\m\l\n\q\o\8\i\4\j\4\o\f\t\8\7\o\6\1\y\t\c\7\g\b\6\d\n\8\n\v\6\o\l\e\u\s\a\h\v\h\n\q\7\t\r\j\4\p\d\x\c\5\k\v\3\a\x\x\c\7\9\n\j\z\x\z\i\i\o\1\w\o\t\d\0\a\a\4\w\d\q\o\3\a\k\u\5\g\t\g\j\7\3\x\m\s\6\e\i\m\2\k\4\a\7\s\1\2\4\f\p\3\h\o\d\l\w\t\6\3\q\v\u\u\f\t\e\m\3\w\c\s\l\x\b\x\3\x\4\p\e\z\t\z\e\j\o\i\t\i\v\r\f\6\w\s\u\t\h\q\s\l\q\g\2\c\x\0\f\1\j\d\b\z\z\1\i\6\i\w\i\g\8\u\8\a\3\5\3\m\i\n\9\o\t\a\v\z\6\a\w\0\9\7\f\z\b\7\s\j\1\3\8\0\e\8\s\5\a\m\z\p\7\b\d\u\f\1\k\r\l\4\0\y\4\9\0\z\7\o\5\d\2\t\x\m\2\w\v\z\r\a\2\m\z\l\6\5\u\a\s\s\r\x\n\i\w\2\b\p\w\n\1\u\u\u\0\s\o\e\e\c\9\p\s\f\m\o\w\5\f\1\4\8\8\d\b\q\a\3\s\8\b\9\c\h\5\t\j\s\w\d\r\u\f\a\7\z\o\l\g\4\b\r\o\6\4\c\c\l\y\7\v\r\s\k\t\o\x\3\x\z\e\d\9\u\9\2\q\8\r\t\e\b\h\i\f\w\r\4\8\8\j\7\9\w\9\6\q\s\6\1\a\y\l\y\9\0\9\c\p\r\k\n\g\k\r\z\5\d\v\6\o\r\c\d\a\2\3\w\l\k\5\3\g\t\g\6\z\4\9\5\8\f\6\a\1\a\d\u\u\x\8\d\5\8\5\k\y\s\h\f\p\6\k\w\r\x\w\v\f\t\2\k\2\1\0\y\o\h\b\m\j\3\8\6\3\y\1\7\t\t\r ]] 00:27:51.556 ************************************ 00:27:51.556 END TEST dd_rw_offset 00:27:51.556 ************************************ 00:27:51.556 00:27:51.556 real 0m4.425s 00:27:51.556 user 0m3.852s 00:27:51.556 sys 0m0.438s 00:27:51.556 13:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:51.556 13:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.556 13:51:30 -- dd/basic_rw.sh@1 -- # cleanup 00:27:51.556 13:51:30 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:51.556 13:51:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:51.556 13:51:30 -- dd/common.sh@11 -- # local nvme_ref= 00:27:51.556 13:51:30 -- dd/common.sh@12 -- # local size=0xffff 00:27:51.556 13:51:30 -- dd/common.sh@14 -- # local bs=1048576 
00:27:51.556 13:51:30 -- dd/common.sh@15 -- # local count=1 00:27:51.556 13:51:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:51.556 13:51:30 -- dd/common.sh@18 -- # gen_conf 00:27:51.556 13:51:30 -- dd/common.sh@31 -- # xtrace_disable 00:27:51.556 13:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.814 [2024-07-10 13:51:30.940352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:51.814 [2024-07-10 13:51:30.940812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138198 ] 00:27:51.814 { 00:27:51.814 "subsystems": [ 00:27:51.814 { 00:27:51.814 "subsystem": "bdev", 00:27:51.814 "config": [ 00:27:51.814 { 00:27:51.814 "params": { 00:27:51.814 "trtype": "pcie", 00:27:51.814 "traddr": "0000:00:06.0", 00:27:51.814 "name": "Nvme0" 00:27:51.814 }, 00:27:51.814 "method": "bdev_nvme_attach_controller" 00:27:51.814 }, 00:27:51.814 { 00:27:51.814 "method": "bdev_wait_for_examine" 00:27:51.814 } 00:27:51.814 ] 00:27:51.814 } 00:27:51.814 ] 00:27:51.814 } 00:27:51.814 [2024-07-10 13:51:31.095691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.074 [2024-07-10 13:51:31.310262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.025  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:54.025 00:27:54.025 13:51:32 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:54.025 ************************************ 00:27:54.025 END TEST spdk_dd_basic_rw 00:27:54.025 ************************************ 00:27:54.025 00:27:54.025 real 0m51.072s 00:27:54.025 user 0m44.682s 00:27:54.025 sys 0m4.950s 00:27:54.025 13:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.025 13:51:32 -- common/autotest_common.sh@10 -- # set +x 00:27:54.025 13:51:33 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:54.025 13:51:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:54.025 13:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.025 13:51:33 -- common/autotest_common.sh@10 -- # set +x 00:27:54.025 ************************************ 00:27:54.025 START TEST spdk_dd_posix 00:27:54.025 ************************************ 00:27:54.025 13:51:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:54.025 * Looking for test storage... 
00:27:54.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:54.025 13:51:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:54.025 13:51:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.025 13:51:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.025 13:51:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.025 13:51:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:54.025 13:51:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:54.025 13:51:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:54.025 13:51:33 -- paths/export.sh@5 -- # export PATH 00:27:54.025 13:51:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:54.025 13:51:33 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:54.025 13:51:33 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:54.025 13:51:33 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:54.025 13:51:33 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:54.025 13:51:33 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:54.025 13:51:33 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:54.025 13:51:33 -- dd/posix.sh@130 -- # tests 00:27:54.025 13:51:33 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:54.025 * First test run, using AIO 00:27:54.025 13:51:33 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:54.025 13:51:33 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:54.025 13:51:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.025 13:51:33 -- common/autotest_common.sh@10 -- # set +x 00:27:54.025 ************************************ 00:27:54.025 START TEST dd_flag_append 00:27:54.025 ************************************ 00:27:54.025 13:51:33 -- common/autotest_common.sh@1104 -- # append 00:27:54.025 13:51:33 -- dd/posix.sh@16 -- # local dump0 00:27:54.025 13:51:33 -- dd/posix.sh@17 -- # local dump1 00:27:54.025 13:51:33 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:54.025 13:51:33 -- dd/common.sh@98 -- # xtrace_disable 00:27:54.025 13:51:33 -- common/autotest_common.sh@10 -- # set +x 00:27:54.025 13:51:33 -- dd/posix.sh@19 -- # dump0=19h2t10kbredhe8pdkni4cadthmms5s4 00:27:54.025 13:51:33 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:54.025 13:51:33 -- dd/common.sh@98 -- # xtrace_disable 00:27:54.025 13:51:33 -- common/autotest_common.sh@10 -- # set +x 00:27:54.025 13:51:33 -- dd/posix.sh@20 -- # dump1=oe48ydd42i9j54k08v2d7u8bcktf8ti9 00:27:54.025 13:51:33 -- dd/posix.sh@22 -- # printf %s 19h2t10kbredhe8pdkni4cadthmms5s4 00:27:54.025 13:51:33 -- dd/posix.sh@23 -- # printf %s oe48ydd42i9j54k08v2d7u8bcktf8ti9 00:27:54.026 13:51:33 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:54.026 [2024-07-10 13:51:33.252036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:54.026 [2024-07-10 13:51:33.252195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138279 ] 00:27:54.285 [2024-07-10 13:51:33.407853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.285 [2024-07-10 13:51:33.619494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.232  Copying: 32/32 [B] (average 31 kBps) 00:27:56.232 00:27:56.232 13:51:35 -- dd/posix.sh@27 -- # [[ oe48ydd42i9j54k08v2d7u8bcktf8ti919h2t10kbredhe8pdkni4cadthmms5s4 == \o\e\4\8\y\d\d\4\2\i\9\j\5\4\k\0\8\v\2\d\7\u\8\b\c\k\t\f\8\t\i\9\1\9\h\2\t\1\0\k\b\r\e\d\h\e\8\p\d\k\n\i\4\c\a\d\t\h\m\m\s\5\s\4 ]] 00:27:56.232 00:27:56.232 real 0m2.167s 00:27:56.232 user 0m1.820s 00:27:56.232 sys 0m0.208s 00:27:56.232 13:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.232 ************************************ 00:27:56.232 END TEST dd_flag_append 00:27:56.232 ************************************ 00:27:56.232 13:51:35 -- common/autotest_common.sh@10 -- # set +x 00:27:56.232 13:51:35 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:56.232 13:51:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:56.232 13:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:56.232 13:51:35 -- common/autotest_common.sh@10 -- # set +x 00:27:56.232 ************************************ 00:27:56.232 START TEST dd_flag_directory 00:27:56.232 ************************************ 00:27:56.232 13:51:35 -- common/autotest_common.sh@1104 -- # directory 00:27:56.232 13:51:35 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:56.232 13:51:35 -- common/autotest_common.sh@640 -- # local es=0 
00:27:56.232 13:51:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:56.232 13:51:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:56.232 13:51:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:56.232 13:51:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:56.232 13:51:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:56.232 13:51:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:56.232 13:51:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:56.232 13:51:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:56.232 13:51:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:56.232 13:51:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:56.232 [2024-07-10 13:51:35.476289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:56.232 [2024-07-10 13:51:35.476410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138332 ] 00:27:56.491 [2024-07-10 13:51:35.631608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.491 [2024-07-10 13:51:35.842777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.060 [2024-07-10 13:51:36.190543] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:57.060 [2024-07-10 13:51:36.190621] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:57.060 [2024-07-10 13:51:36.190637] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:58.000 [2024-07-10 13:51:37.065673] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:58.260 13:51:37 -- common/autotest_common.sh@643 -- # es=236 00:27:58.260 13:51:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:58.260 13:51:37 -- common/autotest_common.sh@652 -- # es=108 00:27:58.260 13:51:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:58.260 13:51:37 -- common/autotest_common.sh@660 -- # es=1 00:27:58.260 13:51:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:58.260 13:51:37 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:58.260 13:51:37 -- common/autotest_common.sh@640 -- # local es=0 00:27:58.260 13:51:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:58.260 13:51:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.260 13:51:37 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:58.260 13:51:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.260 13:51:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:58.260 13:51:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.260 13:51:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:58.260 13:51:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:58.260 13:51:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:58.260 13:51:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:58.260 [2024-07-10 13:51:37.569844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:58.260 [2024-07-10 13:51:37.569980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138364 ] 00:27:58.519 [2024-07-10 13:51:37.726707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.778 [2024-07-10 13:51:37.942643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.038 [2024-07-10 13:51:38.294242] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:59.038 [2024-07-10 13:51:38.294316] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:59.038 [2024-07-10 13:51:38.294335] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:00.013 [2024-07-10 13:51:39.175111] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:00.581 13:51:39 -- common/autotest_common.sh@643 -- # es=236 00:28:00.581 13:51:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:00.581 13:51:39 -- common/autotest_common.sh@652 -- # es=108 00:28:00.581 13:51:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:00.581 13:51:39 -- common/autotest_common.sh@660 -- # es=1 00:28:00.581 13:51:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:00.581 00:28:00.581 real 0m4.222s 00:28:00.581 user 0m3.645s 00:28:00.581 sys 0m0.377s 00:28:00.581 13:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.581 13:51:39 -- common/autotest_common.sh@10 -- # set +x 00:28:00.581 ************************************ 00:28:00.581 END TEST dd_flag_directory 00:28:00.581 ************************************ 00:28:00.581 13:51:39 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:28:00.581 13:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:00.581 13:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:00.581 13:51:39 -- common/autotest_common.sh@10 -- # set +x 00:28:00.581 ************************************ 00:28:00.581 START TEST dd_flag_nofollow 00:28:00.581 ************************************ 00:28:00.581 13:51:39 -- common/autotest_common.sh@1104 -- # nofollow 00:28:00.581 13:51:39 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:00.581 13:51:39 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:00.581 13:51:39 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:00.581 13:51:39 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:00.581 13:51:39 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.581 13:51:39 -- common/autotest_common.sh@640 -- # local es=0 00:28:00.581 13:51:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.581 13:51:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.581 13:51:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.581 13:51:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.581 13:51:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.581 13:51:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.581 13:51:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.581 13:51:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.581 13:51:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:00.581 13:51:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:00.581 [2024-07-10 13:51:39.749507] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
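[Annotation] The dd_flag_nofollow test above symlinks both dump files (ln -fs) and expects spdk_dd to refuse the link when it is opened with --iflag=nofollow, i.e. O_NOFOLLOW failing with ELOOP. The same behavior can be checked outside the suite, assuming a GNU coreutils dd (which also accepts iflag=nofollow):

cd /home/vagrant/spdk_repo/spdk/test/dd
ln -fs dd.dump0 dd.dump0.link
# Expected to fail with "Too many levels of symbolic links" (ELOOP),
# the same error spdk_dd reports in the trace below.
dd if=dd.dump0.link iflag=nofollow of=/dev/null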
00:28:00.581 [2024-07-10 13:51:39.749648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138415 ] 00:28:00.581 [2024-07-10 13:51:39.904943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.841 [2024-07-10 13:51:40.130113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.410 [2024-07-10 13:51:40.494672] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:01.410 [2024-07-10 13:51:40.494760] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:01.410 [2024-07-10 13:51:40.494777] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:02.348 [2024-07-10 13:51:41.354858] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:02.607 13:51:41 -- common/autotest_common.sh@643 -- # es=216 00:28:02.607 13:51:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:02.607 13:51:41 -- common/autotest_common.sh@652 -- # es=88 00:28:02.607 13:51:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:02.607 13:51:41 -- common/autotest_common.sh@660 -- # es=1 00:28:02.607 13:51:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:02.607 13:51:41 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:02.607 13:51:41 -- common/autotest_common.sh@640 -- # local es=0 00:28:02.607 13:51:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:02.607 13:51:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.607 13:51:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.607 13:51:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.607 13:51:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.607 13:51:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.607 13:51:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.607 13:51:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.607 13:51:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:02.607 13:51:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:02.607 [2024-07-10 13:51:41.865582] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:02.607 [2024-07-10 13:51:41.865736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138456 ] 00:28:02.866 [2024-07-10 13:51:42.024396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.127 [2024-07-10 13:51:42.240317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.386 [2024-07-10 13:51:42.600772] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:03.386 [2024-07-10 13:51:42.600846] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:03.386 [2024-07-10 13:51:42.600881] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:04.328 [2024-07-10 13:51:43.478786] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:04.593 13:51:43 -- common/autotest_common.sh@643 -- # es=216 00:28:04.593 13:51:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:04.593 13:51:43 -- common/autotest_common.sh@652 -- # es=88 00:28:04.593 13:51:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:04.593 13:51:43 -- common/autotest_common.sh@660 -- # es=1 00:28:04.593 13:51:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:04.593 13:51:43 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:04.593 13:51:43 -- dd/common.sh@98 -- # xtrace_disable 00:28:04.593 13:51:43 -- common/autotest_common.sh@10 -- # set +x 00:28:04.593 13:51:43 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:04.850 [2024-07-10 13:51:43.993974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:04.850 [2024-07-10 13:51:43.994115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138483 ] 00:28:04.850 [2024-07-10 13:51:44.151281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.108 [2024-07-10 13:51:44.367261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.047  Copying: 512/512 [B] (average 500 kBps) 00:28:07.047 00:28:07.047 13:51:46 -- dd/posix.sh@49 -- # [[ nv5ekfkoxt2ufyy9mnu6n2u1fr04q406a304kw1v50sgus19dwcme1qysjlm3v676op8cpmdoeds3tyk66lcmqoqv7y291fh4srbqczuyyq3nwlc3q0qj79lcw1p4wr94xeuk2xackmc8eqxnls9xz1zk8wsg15nfreh4yh619t1p12bhzgknxm5u57y8zg284p1m4pnf54p7u640zvtyfk3btt0pgluhbwwsvjss75f7ib684okydjqgoy6wdexdvf59ebpjkn1ly4qa16scapng9b3wsai9m3t07u5m05v7tv9th67fprsqt0sl86rbbjr4lprwcb37twqv43dl5ebjqyihnnuiksze188lu8pz2gvpyjqtsq9rhww18sotmufhyal0a0035beq4zzex6n0yt5rkfcsux7in5fhbawl7v00642b8wpc0vyu3ba4pzw1i91fl02if13fk33gahep6qgxcz6gkcw1cth8amzzbbkxdj9l76s78sx0w4t == \n\v\5\e\k\f\k\o\x\t\2\u\f\y\y\9\m\n\u\6\n\2\u\1\f\r\0\4\q\4\0\6\a\3\0\4\k\w\1\v\5\0\s\g\u\s\1\9\d\w\c\m\e\1\q\y\s\j\l\m\3\v\6\7\6\o\p\8\c\p\m\d\o\e\d\s\3\t\y\k\6\6\l\c\m\q\o\q\v\7\y\2\9\1\f\h\4\s\r\b\q\c\z\u\y\y\q\3\n\w\l\c\3\q\0\q\j\7\9\l\c\w\1\p\4\w\r\9\4\x\e\u\k\2\x\a\c\k\m\c\8\e\q\x\n\l\s\9\x\z\1\z\k\8\w\s\g\1\5\n\f\r\e\h\4\y\h\6\1\9\t\1\p\1\2\b\h\z\g\k\n\x\m\5\u\5\7\y\8\z\g\2\8\4\p\1\m\4\p\n\f\5\4\p\7\u\6\4\0\z\v\t\y\f\k\3\b\t\t\0\p\g\l\u\h\b\w\w\s\v\j\s\s\7\5\f\7\i\b\6\8\4\o\k\y\d\j\q\g\o\y\6\w\d\e\x\d\v\f\5\9\e\b\p\j\k\n\1\l\y\4\q\a\1\6\s\c\a\p\n\g\9\b\3\w\s\a\i\9\m\3\t\0\7\u\5\m\0\5\v\7\t\v\9\t\h\6\7\f\p\r\s\q\t\0\s\l\8\6\r\b\b\j\r\4\l\p\r\w\c\b\3\7\t\w\q\v\4\3\d\l\5\e\b\j\q\y\i\h\n\n\u\i\k\s\z\e\1\8\8\l\u\8\p\z\2\g\v\p\y\j\q\t\s\q\9\r\h\w\w\1\8\s\o\t\m\u\f\h\y\a\l\0\a\0\0\3\5\b\e\q\4\z\z\e\x\6\n\0\y\t\5\r\k\f\c\s\u\x\7\i\n\5\f\h\b\a\w\l\7\v\0\0\6\4\2\b\8\w\p\c\0\v\y\u\3\b\a\4\p\z\w\1\i\9\1\f\l\0\2\i\f\1\3\f\k\3\3\g\a\h\e\p\6\q\g\x\c\z\6\g\k\c\w\1\c\t\h\8\a\m\z\z\b\b\k\x\d\j\9\l\7\6\s\7\8\s\x\0\w\4\t ]] 00:28:07.047 00:28:07.047 real 0m6.365s 00:28:07.047 user 0m5.421s 00:28:07.047 sys 0m0.608s 00:28:07.047 ************************************ 00:28:07.047 END TEST dd_flag_nofollow 00:28:07.047 ************************************ 00:28:07.047 13:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.047 13:51:46 -- common/autotest_common.sh@10 -- # set +x 00:28:07.047 13:51:46 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:28:07.047 13:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:07.047 13:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:07.047 13:51:46 -- common/autotest_common.sh@10 -- # set +x 00:28:07.047 ************************************ 00:28:07.047 START TEST dd_flag_noatime 00:28:07.047 ************************************ 00:28:07.047 13:51:46 -- common/autotest_common.sh@1104 -- # noatime 00:28:07.047 13:51:46 -- dd/posix.sh@53 -- # local atime_if 00:28:07.047 13:51:46 -- dd/posix.sh@54 -- # local atime_of 00:28:07.047 13:51:46 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:07.047 13:51:46 -- dd/common.sh@98 -- # xtrace_disable 00:28:07.047 13:51:46 -- common/autotest_common.sh@10 -- # set +x 00:28:07.047 13:51:46 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:07.047 13:51:46 -- dd/posix.sh@60 -- # atime_if=1720619504 00:28:07.047 13:51:46 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:07.047 13:51:46 -- dd/posix.sh@61 -- # atime_of=1720619506 00:28:07.047 13:51:46 -- dd/posix.sh@66 -- # sleep 1 00:28:07.986 13:51:47 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:07.986 [2024-07-10 13:51:47.202061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:07.986 [2024-07-10 13:51:47.202290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138554 ] 00:28:08.307 [2024-07-10 13:51:47.357643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.307 [2024-07-10 13:51:47.573492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.960  Copying: 512/512 [B] (average 500 kBps) 00:28:09.960 00:28:09.960 13:51:49 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:09.960 13:51:49 -- dd/posix.sh@69 -- # (( atime_if == 1720619504 )) 00:28:09.960 13:51:49 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:09.960 13:51:49 -- dd/posix.sh@70 -- # (( atime_of == 1720619506 )) 00:28:09.960 13:51:49 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.219 [2024-07-10 13:51:49.351517] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:10.219 [2024-07-10 13:51:49.351753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138585 ] 00:28:10.219 [2024-07-10 13:51:49.508599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.476 [2024-07-10 13:51:49.729510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.418  Copying: 512/512 [B] (average 500 kBps) 00:28:12.418 00:28:12.418 13:51:51 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:12.418 13:51:51 -- dd/posix.sh@73 -- # (( atime_if < 1720619510 )) 00:28:12.418 00:28:12.418 real 0m5.362s 00:28:12.418 user 0m3.640s 00:28:12.418 sys 0m0.441s 00:28:12.418 13:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.418 13:51:51 -- common/autotest_common.sh@10 -- # set +x 00:28:12.418 ************************************ 00:28:12.418 END TEST dd_flag_noatime 00:28:12.418 ************************************ 00:28:12.418 13:51:51 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:28:12.418 13:51:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:12.418 13:51:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.418 13:51:51 -- common/autotest_common.sh@10 -- # set +x 00:28:12.418 ************************************ 00:28:12.418 START TEST dd_flags_misc 00:28:12.418 ************************************ 00:28:12.418 13:51:51 -- common/autotest_common.sh@1104 -- # io 00:28:12.418 13:51:51 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:12.418 13:51:51 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:28:12.418 13:51:51 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:12.418 13:51:51 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:12.418 13:51:51 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:12.418 13:51:51 -- dd/common.sh@98 -- # xtrace_disable 00:28:12.418 13:51:51 -- common/autotest_common.sh@10 -- # set +x 00:28:12.418 13:51:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:12.418 13:51:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:12.418 [2024-07-10 13:51:51.599286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:12.418 [2024-07-10 13:51:51.599797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138650 ] 00:28:12.418 [2024-07-10 13:51:51.755228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.678 [2024-07-10 13:51:51.975296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.668  Copying: 512/512 [B] (average 500 kBps) 00:28:14.668 00:28:14.668 13:51:53 -- dd/posix.sh@93 -- # [[ 5exefv6pcw7on80zvnay1dt2tvrqy8ox9e1k4gdebmkzjpx53xkv3hz4jujrxde94qrlqi903131m3r3vhtw32pyv93o9urlt7yp2l4qa8h6dh10wotfw01qifq7k6vdozjk3j4sofhpjtphhgbtrt6m8hx81gih9cytf1kb9yff0ivccdxo3rri95sxj2g4wpuwy2j8ic0kal6opztq5tz4o4gt3w1lxr0lvobbzk99kdoevc19dfr99b7y0fl9yajvrpsgd3yqabzds7wm1c6l4yatbdp5fqh77hab4zng3cojnrkml1lbl3q3vdhcfx7i6nlu1pfqyj2ot4twq4tpyl2g1ntg4ks3zcfc1if0vame47aps1ll0wqt9src4w8cg9wh42lyvrftcv14c79z41t6ogw3u7h8mzg0bbmfmy1n6xh3tx2gh2nza9wxd3bvdzwj01x8s7qwljtuyo8sfvxr479bdr4l08meot0vbfyc6g3r6689mfbnb1j7 == \5\e\x\e\f\v\6\p\c\w\7\o\n\8\0\z\v\n\a\y\1\d\t\2\t\v\r\q\y\8\o\x\9\e\1\k\4\g\d\e\b\m\k\z\j\p\x\5\3\x\k\v\3\h\z\4\j\u\j\r\x\d\e\9\4\q\r\l\q\i\9\0\3\1\3\1\m\3\r\3\v\h\t\w\3\2\p\y\v\9\3\o\9\u\r\l\t\7\y\p\2\l\4\q\a\8\h\6\d\h\1\0\w\o\t\f\w\0\1\q\i\f\q\7\k\6\v\d\o\z\j\k\3\j\4\s\o\f\h\p\j\t\p\h\h\g\b\t\r\t\6\m\8\h\x\8\1\g\i\h\9\c\y\t\f\1\k\b\9\y\f\f\0\i\v\c\c\d\x\o\3\r\r\i\9\5\s\x\j\2\g\4\w\p\u\w\y\2\j\8\i\c\0\k\a\l\6\o\p\z\t\q\5\t\z\4\o\4\g\t\3\w\1\l\x\r\0\l\v\o\b\b\z\k\9\9\k\d\o\e\v\c\1\9\d\f\r\9\9\b\7\y\0\f\l\9\y\a\j\v\r\p\s\g\d\3\y\q\a\b\z\d\s\7\w\m\1\c\6\l\4\y\a\t\b\d\p\5\f\q\h\7\7\h\a\b\4\z\n\g\3\c\o\j\n\r\k\m\l\1\l\b\l\3\q\3\v\d\h\c\f\x\7\i\6\n\l\u\1\p\f\q\y\j\2\o\t\4\t\w\q\4\t\p\y\l\2\g\1\n\t\g\4\k\s\3\z\c\f\c\1\i\f\0\v\a\m\e\4\7\a\p\s\1\l\l\0\w\q\t\9\s\r\c\4\w\8\c\g\9\w\h\4\2\l\y\v\r\f\t\c\v\1\4\c\7\9\z\4\1\t\6\o\g\w\3\u\7\h\8\m\z\g\0\b\b\m\f\m\y\1\n\6\x\h\3\t\x\2\g\h\2\n\z\a\9\w\x\d\3\b\v\d\z\w\j\0\1\x\8\s\7\q\w\l\j\t\u\y\o\8\s\f\v\x\r\4\7\9\b\d\r\4\l\0\8\m\e\o\t\0\v\b\f\y\c\6\g\3\r\6\6\8\9\m\f\b\n\b\1\j\7 ]] 00:28:14.668 13:51:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:14.668 13:51:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:14.668 [2024-07-10 13:51:53.741761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
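[Annotation] dd_flags_misc sweeps every input flag against every output flag. From the flags_ro/flags_rw arrays in the trace and the order of the runs that follow (direct paired with direct, nonblock, sync, dsync before nonblock begins), the driving loop is presumably:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        # each pair copies dd.dump0 -> dd.dump1 and verifies the payload;
        # spdk_dd abbreviates the full build/bin path used throughout this run
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
    done
done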
00:28:14.668 [2024-07-10 13:51:53.741978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138683 ] 00:28:14.668 [2024-07-10 13:51:53.898610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.927 [2024-07-10 13:51:54.112333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.586  Copying: 512/512 [B] (average 500 kBps) 00:28:16.586 00:28:16.586 13:51:55 -- dd/posix.sh@93 -- # [[ 5exefv6pcw7on80zvnay1dt2tvrqy8ox9e1k4gdebmkzjpx53xkv3hz4jujrxde94qrlqi903131m3r3vhtw32pyv93o9urlt7yp2l4qa8h6dh10wotfw01qifq7k6vdozjk3j4sofhpjtphhgbtrt6m8hx81gih9cytf1kb9yff0ivccdxo3rri95sxj2g4wpuwy2j8ic0kal6opztq5tz4o4gt3w1lxr0lvobbzk99kdoevc19dfr99b7y0fl9yajvrpsgd3yqabzds7wm1c6l4yatbdp5fqh77hab4zng3cojnrkml1lbl3q3vdhcfx7i6nlu1pfqyj2ot4twq4tpyl2g1ntg4ks3zcfc1if0vame47aps1ll0wqt9src4w8cg9wh42lyvrftcv14c79z41t6ogw3u7h8mzg0bbmfmy1n6xh3tx2gh2nza9wxd3bvdzwj01x8s7qwljtuyo8sfvxr479bdr4l08meot0vbfyc6g3r6689mfbnb1j7 == \5\e\x\e\f\v\6\p\c\w\7\o\n\8\0\z\v\n\a\y\1\d\t\2\t\v\r\q\y\8\o\x\9\e\1\k\4\g\d\e\b\m\k\z\j\p\x\5\3\x\k\v\3\h\z\4\j\u\j\r\x\d\e\9\4\q\r\l\q\i\9\0\3\1\3\1\m\3\r\3\v\h\t\w\3\2\p\y\v\9\3\o\9\u\r\l\t\7\y\p\2\l\4\q\a\8\h\6\d\h\1\0\w\o\t\f\w\0\1\q\i\f\q\7\k\6\v\d\o\z\j\k\3\j\4\s\o\f\h\p\j\t\p\h\h\g\b\t\r\t\6\m\8\h\x\8\1\g\i\h\9\c\y\t\f\1\k\b\9\y\f\f\0\i\v\c\c\d\x\o\3\r\r\i\9\5\s\x\j\2\g\4\w\p\u\w\y\2\j\8\i\c\0\k\a\l\6\o\p\z\t\q\5\t\z\4\o\4\g\t\3\w\1\l\x\r\0\l\v\o\b\b\z\k\9\9\k\d\o\e\v\c\1\9\d\f\r\9\9\b\7\y\0\f\l\9\y\a\j\v\r\p\s\g\d\3\y\q\a\b\z\d\s\7\w\m\1\c\6\l\4\y\a\t\b\d\p\5\f\q\h\7\7\h\a\b\4\z\n\g\3\c\o\j\n\r\k\m\l\1\l\b\l\3\q\3\v\d\h\c\f\x\7\i\6\n\l\u\1\p\f\q\y\j\2\o\t\4\t\w\q\4\t\p\y\l\2\g\1\n\t\g\4\k\s\3\z\c\f\c\1\i\f\0\v\a\m\e\4\7\a\p\s\1\l\l\0\w\q\t\9\s\r\c\4\w\8\c\g\9\w\h\4\2\l\y\v\r\f\t\c\v\1\4\c\7\9\z\4\1\t\6\o\g\w\3\u\7\h\8\m\z\g\0\b\b\m\f\m\y\1\n\6\x\h\3\t\x\2\g\h\2\n\z\a\9\w\x\d\3\b\v\d\z\w\j\0\1\x\8\s\7\q\w\l\j\t\u\y\o\8\s\f\v\x\r\4\7\9\b\d\r\4\l\0\8\m\e\o\t\0\v\b\f\y\c\6\g\3\r\6\6\8\9\m\f\b\n\b\1\j\7 ]] 00:28:16.586 13:51:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:16.586 13:51:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:16.586 [2024-07-10 13:51:55.878478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
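[Annotation] The direct pairs above presumably map to O_DIRECT, which bypasses the page cache and generally requires block-aligned buffers and transfer sizes; the 512-byte payload satisfies typical alignment. The coreutils analogue of the run above, assuming the filesystem supports O_DIRECT:

dd if=dd.dump0 of=dd.dump1 bs=512 count=1 iflag=direct oflag=direct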
00:28:16.586 [2024-07-10 13:51:55.878618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138707 ] 00:28:16.844 [2024-07-10 13:51:56.029265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.103 [2024-07-10 13:51:56.247124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.740  Copying: 512/512 [B] (average 250 kBps) 00:28:18.740 00:28:18.741 13:51:57 -- dd/posix.sh@93 -- # [[ 5exefv6pcw7on80zvnay1dt2tvrqy8ox9e1k4gdebmkzjpx53xkv3hz4jujrxde94qrlqi903131m3r3vhtw32pyv93o9urlt7yp2l4qa8h6dh10wotfw01qifq7k6vdozjk3j4sofhpjtphhgbtrt6m8hx81gih9cytf1kb9yff0ivccdxo3rri95sxj2g4wpuwy2j8ic0kal6opztq5tz4o4gt3w1lxr0lvobbzk99kdoevc19dfr99b7y0fl9yajvrpsgd3yqabzds7wm1c6l4yatbdp5fqh77hab4zng3cojnrkml1lbl3q3vdhcfx7i6nlu1pfqyj2ot4twq4tpyl2g1ntg4ks3zcfc1if0vame47aps1ll0wqt9src4w8cg9wh42lyvrftcv14c79z41t6ogw3u7h8mzg0bbmfmy1n6xh3tx2gh2nza9wxd3bvdzwj01x8s7qwljtuyo8sfvxr479bdr4l08meot0vbfyc6g3r6689mfbnb1j7 == \5\e\x\e\f\v\6\p\c\w\7\o\n\8\0\z\v\n\a\y\1\d\t\2\t\v\r\q\y\8\o\x\9\e\1\k\4\g\d\e\b\m\k\z\j\p\x\5\3\x\k\v\3\h\z\4\j\u\j\r\x\d\e\9\4\q\r\l\q\i\9\0\3\1\3\1\m\3\r\3\v\h\t\w\3\2\p\y\v\9\3\o\9\u\r\l\t\7\y\p\2\l\4\q\a\8\h\6\d\h\1\0\w\o\t\f\w\0\1\q\i\f\q\7\k\6\v\d\o\z\j\k\3\j\4\s\o\f\h\p\j\t\p\h\h\g\b\t\r\t\6\m\8\h\x\8\1\g\i\h\9\c\y\t\f\1\k\b\9\y\f\f\0\i\v\c\c\d\x\o\3\r\r\i\9\5\s\x\j\2\g\4\w\p\u\w\y\2\j\8\i\c\0\k\a\l\6\o\p\z\t\q\5\t\z\4\o\4\g\t\3\w\1\l\x\r\0\l\v\o\b\b\z\k\9\9\k\d\o\e\v\c\1\9\d\f\r\9\9\b\7\y\0\f\l\9\y\a\j\v\r\p\s\g\d\3\y\q\a\b\z\d\s\7\w\m\1\c\6\l\4\y\a\t\b\d\p\5\f\q\h\7\7\h\a\b\4\z\n\g\3\c\o\j\n\r\k\m\l\1\l\b\l\3\q\3\v\d\h\c\f\x\7\i\6\n\l\u\1\p\f\q\y\j\2\o\t\4\t\w\q\4\t\p\y\l\2\g\1\n\t\g\4\k\s\3\z\c\f\c\1\i\f\0\v\a\m\e\4\7\a\p\s\1\l\l\0\w\q\t\9\s\r\c\4\w\8\c\g\9\w\h\4\2\l\y\v\r\f\t\c\v\1\4\c\7\9\z\4\1\t\6\o\g\w\3\u\7\h\8\m\z\g\0\b\b\m\f\m\y\1\n\6\x\h\3\t\x\2\g\h\2\n\z\a\9\w\x\d\3\b\v\d\z\w\j\0\1\x\8\s\7\q\w\l\j\t\u\y\o\8\s\f\v\x\r\4\7\9\b\d\r\4\l\0\8\m\e\o\t\0\v\b\f\y\c\6\g\3\r\6\6\8\9\m\f\b\n\b\1\j\7 ]] 00:28:18.741 13:51:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:18.741 13:51:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:18.741 [2024-07-10 13:51:58.020448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:18.741 [2024-07-10 13:51:58.020653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138736 ] 00:28:18.999 [2024-07-10 13:51:58.175654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.257 [2024-07-10 13:51:58.393715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.895  Copying: 512/512 [B] (average 250 kBps) 00:28:20.895 00:28:20.895 13:52:00 -- dd/posix.sh@93 -- # [[ 5exefv6pcw7on80zvnay1dt2tvrqy8ox9e1k4gdebmkzjpx53xkv3hz4jujrxde94qrlqi903131m3r3vhtw32pyv93o9urlt7yp2l4qa8h6dh10wotfw01qifq7k6vdozjk3j4sofhpjtphhgbtrt6m8hx81gih9cytf1kb9yff0ivccdxo3rri95sxj2g4wpuwy2j8ic0kal6opztq5tz4o4gt3w1lxr0lvobbzk99kdoevc19dfr99b7y0fl9yajvrpsgd3yqabzds7wm1c6l4yatbdp5fqh77hab4zng3cojnrkml1lbl3q3vdhcfx7i6nlu1pfqyj2ot4twq4tpyl2g1ntg4ks3zcfc1if0vame47aps1ll0wqt9src4w8cg9wh42lyvrftcv14c79z41t6ogw3u7h8mzg0bbmfmy1n6xh3tx2gh2nza9wxd3bvdzwj01x8s7qwljtuyo8sfvxr479bdr4l08meot0vbfyc6g3r6689mfbnb1j7 == \5\e\x\e\f\v\6\p\c\w\7\o\n\8\0\z\v\n\a\y\1\d\t\2\t\v\r\q\y\8\o\x\9\e\1\k\4\g\d\e\b\m\k\z\j\p\x\5\3\x\k\v\3\h\z\4\j\u\j\r\x\d\e\9\4\q\r\l\q\i\9\0\3\1\3\1\m\3\r\3\v\h\t\w\3\2\p\y\v\9\3\o\9\u\r\l\t\7\y\p\2\l\4\q\a\8\h\6\d\h\1\0\w\o\t\f\w\0\1\q\i\f\q\7\k\6\v\d\o\z\j\k\3\j\4\s\o\f\h\p\j\t\p\h\h\g\b\t\r\t\6\m\8\h\x\8\1\g\i\h\9\c\y\t\f\1\k\b\9\y\f\f\0\i\v\c\c\d\x\o\3\r\r\i\9\5\s\x\j\2\g\4\w\p\u\w\y\2\j\8\i\c\0\k\a\l\6\o\p\z\t\q\5\t\z\4\o\4\g\t\3\w\1\l\x\r\0\l\v\o\b\b\z\k\9\9\k\d\o\e\v\c\1\9\d\f\r\9\9\b\7\y\0\f\l\9\y\a\j\v\r\p\s\g\d\3\y\q\a\b\z\d\s\7\w\m\1\c\6\l\4\y\a\t\b\d\p\5\f\q\h\7\7\h\a\b\4\z\n\g\3\c\o\j\n\r\k\m\l\1\l\b\l\3\q\3\v\d\h\c\f\x\7\i\6\n\l\u\1\p\f\q\y\j\2\o\t\4\t\w\q\4\t\p\y\l\2\g\1\n\t\g\4\k\s\3\z\c\f\c\1\i\f\0\v\a\m\e\4\7\a\p\s\1\l\l\0\w\q\t\9\s\r\c\4\w\8\c\g\9\w\h\4\2\l\y\v\r\f\t\c\v\1\4\c\7\9\z\4\1\t\6\o\g\w\3\u\7\h\8\m\z\g\0\b\b\m\f\m\y\1\n\6\x\h\3\t\x\2\g\h\2\n\z\a\9\w\x\d\3\b\v\d\z\w\j\0\1\x\8\s\7\q\w\l\j\t\u\y\o\8\s\f\v\x\r\4\7\9\b\d\r\4\l\0\8\m\e\o\t\0\v\b\f\y\c\6\g\3\r\6\6\8\9\m\f\b\n\b\1\j\7 ]] 00:28:20.895 13:52:00 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:20.895 13:52:00 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:20.895 13:52:00 -- dd/common.sh@98 -- # xtrace_disable 00:28:20.895 13:52:00 -- common/autotest_common.sh@10 -- # set +x 00:28:20.895 13:52:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:20.895 13:52:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:20.895 [2024-07-10 13:52:00.207332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:20.895 [2024-07-10 13:52:00.207469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138765 ] 00:28:21.154 [2024-07-10 13:52:00.367189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.412 [2024-07-10 13:52:00.588416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.116  Copying: 512/512 [B] (average 500 kBps) 00:28:23.116 00:28:23.116 13:52:02 -- dd/posix.sh@93 -- # [[ b37vmb38uj531qdc3qvp52357ftqw33v485jyvagct1jk6n3106bnwjgpojoh5p9r3mqzilud05i9d0i9tq5iibfz6bwrw2zw8yjdxgeybc3aj3kyjr5ovvdgup6t15vwuztrtkxdgxycq4pijq877u2u2t7mr5wagnvf098fix5ahyzkb5fhubisbw0ne06cqd7ba6g7larqa8zv07b3x7ja2seubkbaysq0ajgfvrwn831dlumgvze5jvjt82os060ren8gb0oo062u5st134ueqr3wp4tlord875c4wg4z0lxp9aqpdeskn35phpmp3vgv9cf1aip8ve3n3v54do50jzbda42272aekpevcknnemt80cs7nfgl81k0zoml44rdqqm98hnh7palb95gqjknr1i11gnziknpyt2ekxth3lap6nq7bet33e1fi0pow5yoghawmz92k62255y0q5blhkucdx3jd9l8y21f22xz7whnpyfx8spzy4zg4nz == \b\3\7\v\m\b\3\8\u\j\5\3\1\q\d\c\3\q\v\p\5\2\3\5\7\f\t\q\w\3\3\v\4\8\5\j\y\v\a\g\c\t\1\j\k\6\n\3\1\0\6\b\n\w\j\g\p\o\j\o\h\5\p\9\r\3\m\q\z\i\l\u\d\0\5\i\9\d\0\i\9\t\q\5\i\i\b\f\z\6\b\w\r\w\2\z\w\8\y\j\d\x\g\e\y\b\c\3\a\j\3\k\y\j\r\5\o\v\v\d\g\u\p\6\t\1\5\v\w\u\z\t\r\t\k\x\d\g\x\y\c\q\4\p\i\j\q\8\7\7\u\2\u\2\t\7\m\r\5\w\a\g\n\v\f\0\9\8\f\i\x\5\a\h\y\z\k\b\5\f\h\u\b\i\s\b\w\0\n\e\0\6\c\q\d\7\b\a\6\g\7\l\a\r\q\a\8\z\v\0\7\b\3\x\7\j\a\2\s\e\u\b\k\b\a\y\s\q\0\a\j\g\f\v\r\w\n\8\3\1\d\l\u\m\g\v\z\e\5\j\v\j\t\8\2\o\s\0\6\0\r\e\n\8\g\b\0\o\o\0\6\2\u\5\s\t\1\3\4\u\e\q\r\3\w\p\4\t\l\o\r\d\8\7\5\c\4\w\g\4\z\0\l\x\p\9\a\q\p\d\e\s\k\n\3\5\p\h\p\m\p\3\v\g\v\9\c\f\1\a\i\p\8\v\e\3\n\3\v\5\4\d\o\5\0\j\z\b\d\a\4\2\2\7\2\a\e\k\p\e\v\c\k\n\n\e\m\t\8\0\c\s\7\n\f\g\l\8\1\k\0\z\o\m\l\4\4\r\d\q\q\m\9\8\h\n\h\7\p\a\l\b\9\5\g\q\j\k\n\r\1\i\1\1\g\n\z\i\k\n\p\y\t\2\e\k\x\t\h\3\l\a\p\6\n\q\7\b\e\t\3\3\e\1\f\i\0\p\o\w\5\y\o\g\h\a\w\m\z\9\2\k\6\2\2\5\5\y\0\q\5\b\l\h\k\u\c\d\x\3\j\d\9\l\8\y\2\1\f\2\2\x\z\7\w\h\n\p\y\f\x\8\s\p\z\y\4\z\g\4\n\z ]] 00:28:23.116 13:52:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:23.116 13:52:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:23.116 [2024-07-10 13:52:02.378614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:23.116 [2024-07-10 13:52:02.378746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138810 ] 00:28:23.374 [2024-07-10 13:52:02.533629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.634 [2024-07-10 13:52:02.758319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.271  Copying: 512/512 [B] (average 500 kBps) 00:28:25.271 00:28:25.271 13:52:04 -- dd/posix.sh@93 -- # [[ b37vmb38uj531qdc3qvp52357ftqw33v485jyvagct1jk6n3106bnwjgpojoh5p9r3mqzilud05i9d0i9tq5iibfz6bwrw2zw8yjdxgeybc3aj3kyjr5ovvdgup6t15vwuztrtkxdgxycq4pijq877u2u2t7mr5wagnvf098fix5ahyzkb5fhubisbw0ne06cqd7ba6g7larqa8zv07b3x7ja2seubkbaysq0ajgfvrwn831dlumgvze5jvjt82os060ren8gb0oo062u5st134ueqr3wp4tlord875c4wg4z0lxp9aqpdeskn35phpmp3vgv9cf1aip8ve3n3v54do50jzbda42272aekpevcknnemt80cs7nfgl81k0zoml44rdqqm98hnh7palb95gqjknr1i11gnziknpyt2ekxth3lap6nq7bet33e1fi0pow5yoghawmz92k62255y0q5blhkucdx3jd9l8y21f22xz7whnpyfx8spzy4zg4nz == \b\3\7\v\m\b\3\8\u\j\5\3\1\q\d\c\3\q\v\p\5\2\3\5\7\f\t\q\w\3\3\v\4\8\5\j\y\v\a\g\c\t\1\j\k\6\n\3\1\0\6\b\n\w\j\g\p\o\j\o\h\5\p\9\r\3\m\q\z\i\l\u\d\0\5\i\9\d\0\i\9\t\q\5\i\i\b\f\z\6\b\w\r\w\2\z\w\8\y\j\d\x\g\e\y\b\c\3\a\j\3\k\y\j\r\5\o\v\v\d\g\u\p\6\t\1\5\v\w\u\z\t\r\t\k\x\d\g\x\y\c\q\4\p\i\j\q\8\7\7\u\2\u\2\t\7\m\r\5\w\a\g\n\v\f\0\9\8\f\i\x\5\a\h\y\z\k\b\5\f\h\u\b\i\s\b\w\0\n\e\0\6\c\q\d\7\b\a\6\g\7\l\a\r\q\a\8\z\v\0\7\b\3\x\7\j\a\2\s\e\u\b\k\b\a\y\s\q\0\a\j\g\f\v\r\w\n\8\3\1\d\l\u\m\g\v\z\e\5\j\v\j\t\8\2\o\s\0\6\0\r\e\n\8\g\b\0\o\o\0\6\2\u\5\s\t\1\3\4\u\e\q\r\3\w\p\4\t\l\o\r\d\8\7\5\c\4\w\g\4\z\0\l\x\p\9\a\q\p\d\e\s\k\n\3\5\p\h\p\m\p\3\v\g\v\9\c\f\1\a\i\p\8\v\e\3\n\3\v\5\4\d\o\5\0\j\z\b\d\a\4\2\2\7\2\a\e\k\p\e\v\c\k\n\n\e\m\t\8\0\c\s\7\n\f\g\l\8\1\k\0\z\o\m\l\4\4\r\d\q\q\m\9\8\h\n\h\7\p\a\l\b\9\5\g\q\j\k\n\r\1\i\1\1\g\n\z\i\k\n\p\y\t\2\e\k\x\t\h\3\l\a\p\6\n\q\7\b\e\t\3\3\e\1\f\i\0\p\o\w\5\y\o\g\h\a\w\m\z\9\2\k\6\2\2\5\5\y\0\q\5\b\l\h\k\u\c\d\x\3\j\d\9\l\8\y\2\1\f\2\2\x\z\7\w\h\n\p\y\f\x\8\s\p\z\y\4\z\g\4\n\z ]] 00:28:25.271 13:52:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:25.271 13:52:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:25.271 [2024-07-10 13:52:04.548270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:25.271 [2024-07-10 13:52:04.548423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138841 ] 00:28:25.531 [2024-07-10 13:52:04.706411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.790 [2024-07-10 13:52:04.929703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.428  Copying: 512/512 [B] (average 83 kBps) 00:28:27.428 00:28:27.428 13:52:06 -- dd/posix.sh@93 -- # [[ b37vmb38uj531qdc3qvp52357ftqw33v485jyvagct1jk6n3106bnwjgpojoh5p9r3mqzilud05i9d0i9tq5iibfz6bwrw2zw8yjdxgeybc3aj3kyjr5ovvdgup6t15vwuztrtkxdgxycq4pijq877u2u2t7mr5wagnvf098fix5ahyzkb5fhubisbw0ne06cqd7ba6g7larqa8zv07b3x7ja2seubkbaysq0ajgfvrwn831dlumgvze5jvjt82os060ren8gb0oo062u5st134ueqr3wp4tlord875c4wg4z0lxp9aqpdeskn35phpmp3vgv9cf1aip8ve3n3v54do50jzbda42272aekpevcknnemt80cs7nfgl81k0zoml44rdqqm98hnh7palb95gqjknr1i11gnziknpyt2ekxth3lap6nq7bet33e1fi0pow5yoghawmz92k62255y0q5blhkucdx3jd9l8y21f22xz7whnpyfx8spzy4zg4nz == \b\3\7\v\m\b\3\8\u\j\5\3\1\q\d\c\3\q\v\p\5\2\3\5\7\f\t\q\w\3\3\v\4\8\5\j\y\v\a\g\c\t\1\j\k\6\n\3\1\0\6\b\n\w\j\g\p\o\j\o\h\5\p\9\r\3\m\q\z\i\l\u\d\0\5\i\9\d\0\i\9\t\q\5\i\i\b\f\z\6\b\w\r\w\2\z\w\8\y\j\d\x\g\e\y\b\c\3\a\j\3\k\y\j\r\5\o\v\v\d\g\u\p\6\t\1\5\v\w\u\z\t\r\t\k\x\d\g\x\y\c\q\4\p\i\j\q\8\7\7\u\2\u\2\t\7\m\r\5\w\a\g\n\v\f\0\9\8\f\i\x\5\a\h\y\z\k\b\5\f\h\u\b\i\s\b\w\0\n\e\0\6\c\q\d\7\b\a\6\g\7\l\a\r\q\a\8\z\v\0\7\b\3\x\7\j\a\2\s\e\u\b\k\b\a\y\s\q\0\a\j\g\f\v\r\w\n\8\3\1\d\l\u\m\g\v\z\e\5\j\v\j\t\8\2\o\s\0\6\0\r\e\n\8\g\b\0\o\o\0\6\2\u\5\s\t\1\3\4\u\e\q\r\3\w\p\4\t\l\o\r\d\8\7\5\c\4\w\g\4\z\0\l\x\p\9\a\q\p\d\e\s\k\n\3\5\p\h\p\m\p\3\v\g\v\9\c\f\1\a\i\p\8\v\e\3\n\3\v\5\4\d\o\5\0\j\z\b\d\a\4\2\2\7\2\a\e\k\p\e\v\c\k\n\n\e\m\t\8\0\c\s\7\n\f\g\l\8\1\k\0\z\o\m\l\4\4\r\d\q\q\m\9\8\h\n\h\7\p\a\l\b\9\5\g\q\j\k\n\r\1\i\1\1\g\n\z\i\k\n\p\y\t\2\e\k\x\t\h\3\l\a\p\6\n\q\7\b\e\t\3\3\e\1\f\i\0\p\o\w\5\y\o\g\h\a\w\m\z\9\2\k\6\2\2\5\5\y\0\q\5\b\l\h\k\u\c\d\x\3\j\d\9\l\8\y\2\1\f\2\2\x\z\7\w\h\n\p\y\f\x\8\s\p\z\y\4\z\g\4\n\z ]] 00:28:27.428 13:52:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:27.428 13:52:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:27.428 [2024-07-10 13:52:06.758128] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:27.428 [2024-07-10 13:52:06.758267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138870 ] 00:28:27.687 [2024-07-10 13:52:06.904119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.946 [2024-07-10 13:52:07.136482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.580  Copying: 512/512 [B] (average 166 kBps) 00:28:29.580 00:28:29.580 ************************************ 00:28:29.580 END TEST dd_flags_misc 00:28:29.580 ************************************ 00:28:29.580 13:52:08 -- dd/posix.sh@93 -- # [[ b37vmb38uj531qdc3qvp52357ftqw33v485jyvagct1jk6n3106bnwjgpojoh5p9r3mqzilud05i9d0i9tq5iibfz6bwrw2zw8yjdxgeybc3aj3kyjr5ovvdgup6t15vwuztrtkxdgxycq4pijq877u2u2t7mr5wagnvf098fix5ahyzkb5fhubisbw0ne06cqd7ba6g7larqa8zv07b3x7ja2seubkbaysq0ajgfvrwn831dlumgvze5jvjt82os060ren8gb0oo062u5st134ueqr3wp4tlord875c4wg4z0lxp9aqpdeskn35phpmp3vgv9cf1aip8ve3n3v54do50jzbda42272aekpevcknnemt80cs7nfgl81k0zoml44rdqqm98hnh7palb95gqjknr1i11gnziknpyt2ekxth3lap6nq7bet33e1fi0pow5yoghawmz92k62255y0q5blhkucdx3jd9l8y21f22xz7whnpyfx8spzy4zg4nz == \b\3\7\v\m\b\3\8\u\j\5\3\1\q\d\c\3\q\v\p\5\2\3\5\7\f\t\q\w\3\3\v\4\8\5\j\y\v\a\g\c\t\1\j\k\6\n\3\1\0\6\b\n\w\j\g\p\o\j\o\h\5\p\9\r\3\m\q\z\i\l\u\d\0\5\i\9\d\0\i\9\t\q\5\i\i\b\f\z\6\b\w\r\w\2\z\w\8\y\j\d\x\g\e\y\b\c\3\a\j\3\k\y\j\r\5\o\v\v\d\g\u\p\6\t\1\5\v\w\u\z\t\r\t\k\x\d\g\x\y\c\q\4\p\i\j\q\8\7\7\u\2\u\2\t\7\m\r\5\w\a\g\n\v\f\0\9\8\f\i\x\5\a\h\y\z\k\b\5\f\h\u\b\i\s\b\w\0\n\e\0\6\c\q\d\7\b\a\6\g\7\l\a\r\q\a\8\z\v\0\7\b\3\x\7\j\a\2\s\e\u\b\k\b\a\y\s\q\0\a\j\g\f\v\r\w\n\8\3\1\d\l\u\m\g\v\z\e\5\j\v\j\t\8\2\o\s\0\6\0\r\e\n\8\g\b\0\o\o\0\6\2\u\5\s\t\1\3\4\u\e\q\r\3\w\p\4\t\l\o\r\d\8\7\5\c\4\w\g\4\z\0\l\x\p\9\a\q\p\d\e\s\k\n\3\5\p\h\p\m\p\3\v\g\v\9\c\f\1\a\i\p\8\v\e\3\n\3\v\5\4\d\o\5\0\j\z\b\d\a\4\2\2\7\2\a\e\k\p\e\v\c\k\n\n\e\m\t\8\0\c\s\7\n\f\g\l\8\1\k\0\z\o\m\l\4\4\r\d\q\q\m\9\8\h\n\h\7\p\a\l\b\9\5\g\q\j\k\n\r\1\i\1\1\g\n\z\i\k\n\p\y\t\2\e\k\x\t\h\3\l\a\p\6\n\q\7\b\e\t\3\3\e\1\f\i\0\p\o\w\5\y\o\g\h\a\w\m\z\9\2\k\6\2\2\5\5\y\0\q\5\b\l\h\k\u\c\d\x\3\j\d\9\l\8\y\2\1\f\2\2\x\z\7\w\h\n\p\y\f\x\8\s\p\z\y\4\z\g\4\n\z ]] 00:28:29.580 00:28:29.580 real 0m17.364s 00:28:29.580 user 0m14.766s 00:28:29.580 sys 0m1.525s 00:28:29.580 13:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.580 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:29.840 13:52:08 -- dd/posix.sh@131 -- # tests_forced_aio 00:28:29.840 13:52:08 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:28:29.840 * Second test run, using AIO 00:28:29.840 13:52:08 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:28:29.840 13:52:08 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:28:29.840 13:52:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.840 13:52:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.840 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:29.840 ************************************ 00:28:29.840 START TEST dd_flag_append_forced_aio 00:28:29.840 ************************************ 00:28:29.840 13:52:08 -- common/autotest_common.sh@1104 -- # append 00:28:29.840 13:52:08 -- dd/posix.sh@16 -- # local dump0 00:28:29.840 13:52:08 -- dd/posix.sh@17 -- # local dump1 00:28:29.840 13:52:08 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:29.840 13:52:08 -- dd/common.sh@98 -- # xtrace_disable 
00:28:29.840 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:29.840 13:52:08 -- dd/posix.sh@19 -- # dump0=u0fofkczdot36135zzp0y026ug37nnk3 00:28:29.840 13:52:08 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:29.840 13:52:08 -- dd/common.sh@98 -- # xtrace_disable 00:28:29.840 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:28:29.840 13:52:08 -- dd/posix.sh@20 -- # dump1=nanhesa8n79yzjof4542gzt80lowyxss 00:28:29.840 13:52:08 -- dd/posix.sh@22 -- # printf %s u0fofkczdot36135zzp0y026ug37nnk3 00:28:29.840 13:52:08 -- dd/posix.sh@23 -- # printf %s nanhesa8n79yzjof4542gzt80lowyxss 00:28:29.840 13:52:08 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:29.840 [2024-07-10 13:52:09.022938] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:29.840 [2024-07-10 13:52:09.023486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138920 ] 00:28:29.840 [2024-07-10 13:52:09.179623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.098 [2024-07-10 13:52:09.396694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.084  Copying: 32/32 [B] (average 31 kBps) 00:28:32.084 00:28:32.084 13:52:11 -- dd/posix.sh@27 -- # [[ nanhesa8n79yzjof4542gzt80lowyxssu0fofkczdot36135zzp0y026ug37nnk3 == \n\a\n\h\e\s\a\8\n\7\9\y\z\j\o\f\4\5\4\2\g\z\t\8\0\l\o\w\y\x\s\s\u\0\f\o\f\k\c\z\d\o\t\3\6\1\3\5\z\z\p\0\y\0\2\6\u\g\3\7\n\n\k\3 ]] 00:28:32.084 00:28:32.084 real 0m2.211s 00:28:32.084 user 0m1.864s 00:28:32.084 sys 0m0.216s 00:28:32.084 13:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.084 13:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:32.084 ************************************ 00:28:32.084 END TEST dd_flag_append_forced_aio 00:28:32.084 ************************************ 00:28:32.084 13:52:11 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:28:32.084 13:52:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:32.084 13:52:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.084 13:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:32.084 ************************************ 00:28:32.084 START TEST dd_flag_directory_forced_aio 00:28:32.084 ************************************ 00:28:32.084 13:52:11 -- common/autotest_common.sh@1104 -- # directory 00:28:32.084 13:52:11 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:32.084 13:52:11 -- common/autotest_common.sh@640 -- # local es=0 00:28:32.084 13:52:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:32.084 13:52:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.084 13:52:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.084 13:52:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.084 13:52:11 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.084 13:52:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.084 13:52:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.084 13:52:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.084 13:52:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:32.084 13:52:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:32.084 [2024-07-10 13:52:11.290657] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:32.084 [2024-07-10 13:52:11.290809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138990 ] 00:28:32.343 [2024-07-10 13:52:11.446420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.343 [2024-07-10 13:52:11.675907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.913 [2024-07-10 13:52:12.062783] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:32.913 [2024-07-10 13:52:12.062863] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:32.913 [2024-07-10 13:52:12.062882] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:33.850 [2024-07-10 13:52:12.982076] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:34.109 13:52:13 -- common/autotest_common.sh@643 -- # es=236 00:28:34.109 13:52:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:34.109 13:52:13 -- common/autotest_common.sh@652 -- # es=108 00:28:34.109 13:52:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:34.109 13:52:13 -- common/autotest_common.sh@660 -- # es=1 00:28:34.109 13:52:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:34.109 13:52:13 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:34.109 13:52:13 -- common/autotest_common.sh@640 -- # local es=0 00:28:34.109 13:52:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:34.109 13:52:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:34.109 13:52:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.109 13:52:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:34.109 13:52:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.109 13:52:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:34.369 13:52:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:34.369 13:52:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:28:34.369 13:52:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:34.369 13:52:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:34.369 [2024-07-10 13:52:13.520911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:34.369 [2024-07-10 13:52:13.521474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139023 ] 00:28:34.369 [2024-07-10 13:52:13.682620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.627 [2024-07-10 13:52:13.898829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.196 [2024-07-10 13:52:14.256768] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:35.196 [2024-07-10 13:52:14.256860] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:35.196 [2024-07-10 13:52:14.256879] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:36.135 [2024-07-10 13:52:15.130037] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:36.395 13:52:15 -- common/autotest_common.sh@643 -- # es=236 00:28:36.395 13:52:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:36.395 13:52:15 -- common/autotest_common.sh@652 -- # es=108 00:28:36.395 13:52:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:36.395 13:52:15 -- common/autotest_common.sh@660 -- # es=1 00:28:36.395 13:52:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:36.395 00:28:36.395 real 0m4.354s 00:28:36.395 user 0m3.700s 00:28:36.395 sys 0m0.452s 00:28:36.395 13:52:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.395 13:52:15 -- common/autotest_common.sh@10 -- # set +x 00:28:36.395 ************************************ 00:28:36.395 END TEST dd_flag_directory_forced_aio 00:28:36.395 ************************************ 00:28:36.395 13:52:15 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:36.395 13:52:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:36.395 13:52:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.395 13:52:15 -- common/autotest_common.sh@10 -- # set +x 00:28:36.395 ************************************ 00:28:36.395 START TEST dd_flag_nofollow_forced_aio 00:28:36.395 ************************************ 00:28:36.395 13:52:15 -- common/autotest_common.sh@1104 -- # nofollow 00:28:36.395 13:52:15 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:36.395 13:52:15 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:36.395 13:52:15 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:36.395 13:52:15 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:36.395 13:52:15 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:36.395 13:52:15 -- common/autotest_common.sh@640 -- # local es=0 00:28:36.395 13:52:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:36.395 13:52:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.395 13:52:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:36.395 13:52:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.395 13:52:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:36.395 13:52:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.395 13:52:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:36.395 13:52:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.395 13:52:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:36.395 13:52:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:36.395 [2024-07-10 13:52:15.719214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:36.395 [2024-07-10 13:52:15.719339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139068 ] 00:28:36.653 [2024-07-10 13:52:15.877894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.952 [2024-07-10 13:52:16.086123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.212 [2024-07-10 13:52:16.438622] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:37.212 [2024-07-10 13:52:16.438699] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:37.212 [2024-07-10 13:52:16.438716] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:38.146 [2024-07-10 13:52:17.305882] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:38.405 13:52:17 -- common/autotest_common.sh@643 -- # es=216 00:28:38.405 13:52:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:38.405 13:52:17 -- common/autotest_common.sh@652 -- # es=88 00:28:38.405 13:52:17 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:38.405 13:52:17 -- common/autotest_common.sh@660 -- # es=1 00:28:38.405 13:52:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:38.405 13:52:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:38.405 13:52:17 -- common/autotest_common.sh@640 -- # local es=0 00:28:38.405 13:52:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:38.405 13:52:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:38.664 13:52:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.664 13:52:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:38.664 13:52:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.664 13:52:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:38.664 13:52:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.664 13:52:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:38.664 13:52:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:38.664 13:52:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:38.664 [2024-07-10 13:52:17.808576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:38.664 [2024-07-10 13:52:17.809105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139103 ] 00:28:38.664 [2024-07-10 13:52:17.970236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.924 [2024-07-10 13:52:18.188204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.493 [2024-07-10 13:52:18.557815] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:39.493 [2024-07-10 13:52:18.557888] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:39.493 [2024-07-10 13:52:18.557907] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:40.427 [2024-07-10 13:52:19.439577] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:40.687 13:52:19 -- common/autotest_common.sh@643 -- # es=216 00:28:40.687 13:52:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:40.687 13:52:19 -- common/autotest_common.sh@652 -- # es=88 00:28:40.687 13:52:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:40.687 13:52:19 -- common/autotest_common.sh@660 -- # es=1 00:28:40.687 13:52:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:40.687 13:52:19 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:40.687 13:52:19 -- dd/common.sh@98 -- # xtrace_disable 00:28:40.687 13:52:19 -- common/autotest_common.sh@10 -- # set +x 00:28:40.687 13:52:19 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:40.687 [2024-07-10 13:52:19.952372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:40.687 [2024-07-10 13:52:19.952500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139135 ] 00:28:40.947 [2024-07-10 13:52:20.113173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.206 [2024-07-10 13:52:20.334571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.904  Copying: 512/512 [B] (average 500 kBps) 00:28:42.904 00:28:42.904 13:52:22 -- dd/posix.sh@49 -- # [[ d4kvo9f88xncte6p6mbywlcnvbyjel1dfl5u7ryxbgb3gxyfugkjdeth1ht8nfc5utdjfnztdj5fgcjpab71tru3kepu4a61naffutbz8a4rqe283wzwz3pffhdr2oho68qvf5pu6fla2ji9aljxofc2qs2usoq68s4n3yp5p5id1rxj4cazpd51f524oxxl6yz62hnz9kob8jnmtost212k6xccfks8qfc4sbn1s33bfljvrnlntnvhdatlszjnlronin7x1k9otrgagbn2v3bev565j1w1ht2wgjr4b166bvsc5k5d49fn2lcmonr584tm3ah5e9olsa8n64ktza5ylmrveeshluhw4hr1xbxompqnfg4ci57787binusohb0kvqztg9hicun5m64t5euw4n2gxkpbw5p3245x8f7uonizxr25bg0yz3h2iiroejq0mtesx56fy7cuyudvo2to0v64ta4hwhip308eeoawi79b7qbxpiwhinpn1ev6 == \d\4\k\v\o\9\f\8\8\x\n\c\t\e\6\p\6\m\b\y\w\l\c\n\v\b\y\j\e\l\1\d\f\l\5\u\7\r\y\x\b\g\b\3\g\x\y\f\u\g\k\j\d\e\t\h\1\h\t\8\n\f\c\5\u\t\d\j\f\n\z\t\d\j\5\f\g\c\j\p\a\b\7\1\t\r\u\3\k\e\p\u\4\a\6\1\n\a\f\f\u\t\b\z\8\a\4\r\q\e\2\8\3\w\z\w\z\3\p\f\f\h\d\r\2\o\h\o\6\8\q\v\f\5\p\u\6\f\l\a\2\j\i\9\a\l\j\x\o\f\c\2\q\s\2\u\s\o\q\6\8\s\4\n\3\y\p\5\p\5\i\d\1\r\x\j\4\c\a\z\p\d\5\1\f\5\2\4\o\x\x\l\6\y\z\6\2\h\n\z\9\k\o\b\8\j\n\m\t\o\s\t\2\1\2\k\6\x\c\c\f\k\s\8\q\f\c\4\s\b\n\1\s\3\3\b\f\l\j\v\r\n\l\n\t\n\v\h\d\a\t\l\s\z\j\n\l\r\o\n\i\n\7\x\1\k\9\o\t\r\g\a\g\b\n\2\v\3\b\e\v\5\6\5\j\1\w\1\h\t\2\w\g\j\r\4\b\1\6\6\b\v\s\c\5\k\5\d\4\9\f\n\2\l\c\m\o\n\r\5\8\4\t\m\3\a\h\5\e\9\o\l\s\a\8\n\6\4\k\t\z\a\5\y\l\m\r\v\e\e\s\h\l\u\h\w\4\h\r\1\x\b\x\o\m\p\q\n\f\g\4\c\i\5\7\7\8\7\b\i\n\u\s\o\h\b\0\k\v\q\z\t\g\9\h\i\c\u\n\5\m\6\4\t\5\e\u\w\4\n\2\g\x\k\p\b\w\5\p\3\2\4\5\x\8\f\7\u\o\n\i\z\x\r\2\5\b\g\0\y\z\3\h\2\i\i\r\o\e\j\q\0\m\t\e\s\x\5\6\f\y\7\c\u\y\u\d\v\o\2\t\o\0\v\6\4\t\a\4\h\w\h\i\p\3\0\8\e\e\o\a\w\i\7\9\b\7\q\b\x\p\i\w\h\i\n\p\n\1\e\v\6 ]] 00:28:42.904 00:28:42.904 real 0m6.425s 00:28:42.904 user 0m5.456s 00:28:42.904 sys 0m0.639s 00:28:42.904 13:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.904 ************************************ 00:28:42.904 END TEST dd_flag_nofollow_forced_aio 00:28:42.904 13:52:22 -- common/autotest_common.sh@10 -- # set +x 00:28:42.904 ************************************ 00:28:42.904 13:52:22 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:42.904 13:52:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:42.904 13:52:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.904 13:52:22 -- common/autotest_common.sh@10 -- # set +x 00:28:42.904 ************************************ 00:28:42.904 START TEST dd_flag_noatime_forced_aio 00:28:42.904 ************************************ 00:28:42.904 13:52:22 -- common/autotest_common.sh@1104 -- # noatime 00:28:42.904 13:52:22 -- dd/posix.sh@53 -- # local atime_if 00:28:42.904 13:52:22 -- dd/posix.sh@54 -- # local atime_of 00:28:42.904 13:52:22 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:42.904 13:52:22 -- dd/common.sh@98 -- # xtrace_disable 00:28:42.904 13:52:22 -- common/autotest_common.sh@10 -- # set +x 00:28:42.904 13:52:22 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:42.904 13:52:22 -- dd/posix.sh@60 -- # atime_if=1720619540 
00:28:42.904 13:52:22 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:42.904 13:52:22 -- dd/posix.sh@61 -- # atime_of=1720619542 00:28:42.904 13:52:22 -- dd/posix.sh@66 -- # sleep 1 00:28:43.842 13:52:23 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:44.100 [2024-07-10 13:52:23.209737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:44.100 [2024-07-10 13:52:23.209896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139222 ] 00:28:44.100 [2024-07-10 13:52:23.370041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.359 [2024-07-10 13:52:23.585976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.000  Copying: 512/512 [B] (average 500 kBps) 00:28:46.000 00:28:46.000 13:52:25 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:46.000 13:52:25 -- dd/posix.sh@69 -- # (( atime_if == 1720619540 )) 00:28:46.000 13:52:25 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.000 13:52:25 -- dd/posix.sh@70 -- # (( atime_of == 1720619542 )) 00:28:46.000 13:52:25 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.260 [2024-07-10 13:52:25.362420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:46.260 [2024-07-10 13:52:25.362573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139249 ] 00:28:46.260 [2024-07-10 13:52:25.523152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.520 [2024-07-10 13:52:25.740583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.160  Copying: 512/512 [B] (average 500 kBps) 00:28:48.160 00:28:48.160 13:52:27 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:48.160 13:52:27 -- dd/posix.sh@73 -- # (( atime_if < 1720619546 )) 00:28:48.160 00:28:48.160 real 0m5.339s 00:28:48.160 user 0m3.651s 00:28:48.160 sys 0m0.417s 00:28:48.160 13:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.160 13:52:27 -- common/autotest_common.sh@10 -- # set +x 00:28:48.160 ************************************ 00:28:48.160 END TEST dd_flag_noatime_forced_aio 00:28:48.160 ************************************ 00:28:48.160 13:52:27 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:48.160 13:52:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:48.160 13:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:48.160 13:52:27 -- common/autotest_common.sh@10 -- # set +x 00:28:48.420 ************************************ 00:28:48.420 START TEST dd_flags_misc_forced_aio 00:28:48.420 ************************************ 00:28:48.420 13:52:27 -- common/autotest_common.sh@1104 -- # io 00:28:48.420 13:52:27 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:48.420 13:52:27 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:48.420 13:52:27 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:48.420 13:52:27 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:48.420 13:52:27 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:48.420 13:52:27 -- dd/common.sh@98 -- # xtrace_disable 00:28:48.420 13:52:27 -- common/autotest_common.sh@10 -- # set +x 00:28:48.420 13:52:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:48.420 13:52:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:48.420 [2024-07-10 13:52:27.596305] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:48.420 [2024-07-10 13:52:27.596449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139297 ] 00:28:48.420 [2024-07-10 13:52:27.753971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.679 [2024-07-10 13:52:27.960764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.625  Copying: 512/512 [B] (average 500 kBps) 00:28:50.625 00:28:50.625 13:52:29 -- dd/posix.sh@93 -- # [[ d5do1i0ta0zk4e1mxbvg6ux307xzlt5zcnx1eicdx0yi56kwzrp29fxr3shjgdi1lmsuinmr2hbhoe3gouw24du9kgrrqdyincfu721v921of66llkeytow7rdi10cy4yrnnfk4g0zvyhbwj6k5b92x9h4ea62kbuztkts2di52670ty9qmwnololcv7h768vqssftxcrct1oliiiqms02hitq362sk03dq5c6wg9ki65l972hlwckxmxv5vhghnbplex5gwwv2hfysna24ma1hzh3wq87ec452wapbybhqtzmlvg3lk11ymenhzzq0x75jpt3gjnprtaf9jwwuxarwn8pk7p3e0w1tr7msrl79btuhh7qpt002awwdkh6p0j73yihk1yq5at6y98tm93hc51ce5634tu3kkeefa4ce76oiwc25k1guhvhml5617wu5ak737azv6jkpsh415qaq4klfrfzbmwoydlyqx6n4d8qeoqe8bljd9mkuzmaqx == \d\5\d\o\1\i\0\t\a\0\z\k\4\e\1\m\x\b\v\g\6\u\x\3\0\7\x\z\l\t\5\z\c\n\x\1\e\i\c\d\x\0\y\i\5\6\k\w\z\r\p\2\9\f\x\r\3\s\h\j\g\d\i\1\l\m\s\u\i\n\m\r\2\h\b\h\o\e\3\g\o\u\w\2\4\d\u\9\k\g\r\r\q\d\y\i\n\c\f\u\7\2\1\v\9\2\1\o\f\6\6\l\l\k\e\y\t\o\w\7\r\d\i\1\0\c\y\4\y\r\n\n\f\k\4\g\0\z\v\y\h\b\w\j\6\k\5\b\9\2\x\9\h\4\e\a\6\2\k\b\u\z\t\k\t\s\2\d\i\5\2\6\7\0\t\y\9\q\m\w\n\o\l\o\l\c\v\7\h\7\6\8\v\q\s\s\f\t\x\c\r\c\t\1\o\l\i\i\i\q\m\s\0\2\h\i\t\q\3\6\2\s\k\0\3\d\q\5\c\6\w\g\9\k\i\6\5\l\9\7\2\h\l\w\c\k\x\m\x\v\5\v\h\g\h\n\b\p\l\e\x\5\g\w\w\v\2\h\f\y\s\n\a\2\4\m\a\1\h\z\h\3\w\q\8\7\e\c\4\5\2\w\a\p\b\y\b\h\q\t\z\m\l\v\g\3\l\k\1\1\y\m\e\n\h\z\z\q\0\x\7\5\j\p\t\3\g\j\n\p\r\t\a\f\9\j\w\w\u\x\a\r\w\n\8\p\k\7\p\3\e\0\w\1\t\r\7\m\s\r\l\7\9\b\t\u\h\h\7\q\p\t\0\0\2\a\w\w\d\k\h\6\p\0\j\7\3\y\i\h\k\1\y\q\5\a\t\6\y\9\8\t\m\9\3\h\c\5\1\c\e\5\6\3\4\t\u\3\k\k\e\e\f\a\4\c\e\7\6\o\i\w\c\2\5\k\1\g\u\h\v\h\m\l\5\6\1\7\w\u\5\a\k\7\3\7\a\z\v\6\j\k\p\s\h\4\1\5\q\a\q\4\k\l\f\r\f\z\b\m\w\o\y\d\l\y\q\x\6\n\4\d\8\q\e\o\q\e\8\b\l\j\d\9\m\k\u\z\m\a\q\x ]] 00:28:50.625 13:52:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:50.625 13:52:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:50.625 [2024-07-10 13:52:29.740431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:50.625 [2024-07-10 13:52:29.740922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139330 ] 00:28:50.625 [2024-07-10 13:52:29.898542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.884 [2024-07-10 13:52:30.114587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.520  Copying: 512/512 [B] (average 500 kBps) 00:28:52.520 00:28:52.520 13:52:31 -- dd/posix.sh@93 -- # [[ d5do1i0ta0zk4e1mxbvg6ux307xzlt5zcnx1eicdx0yi56kwzrp29fxr3shjgdi1lmsuinmr2hbhoe3gouw24du9kgrrqdyincfu721v921of66llkeytow7rdi10cy4yrnnfk4g0zvyhbwj6k5b92x9h4ea62kbuztkts2di52670ty9qmwnololcv7h768vqssftxcrct1oliiiqms02hitq362sk03dq5c6wg9ki65l972hlwckxmxv5vhghnbplex5gwwv2hfysna24ma1hzh3wq87ec452wapbybhqtzmlvg3lk11ymenhzzq0x75jpt3gjnprtaf9jwwuxarwn8pk7p3e0w1tr7msrl79btuhh7qpt002awwdkh6p0j73yihk1yq5at6y98tm93hc51ce5634tu3kkeefa4ce76oiwc25k1guhvhml5617wu5ak737azv6jkpsh415qaq4klfrfzbmwoydlyqx6n4d8qeoqe8bljd9mkuzmaqx == \d\5\d\o\1\i\0\t\a\0\z\k\4\e\1\m\x\b\v\g\6\u\x\3\0\7\x\z\l\t\5\z\c\n\x\1\e\i\c\d\x\0\y\i\5\6\k\w\z\r\p\2\9\f\x\r\3\s\h\j\g\d\i\1\l\m\s\u\i\n\m\r\2\h\b\h\o\e\3\g\o\u\w\2\4\d\u\9\k\g\r\r\q\d\y\i\n\c\f\u\7\2\1\v\9\2\1\o\f\6\6\l\l\k\e\y\t\o\w\7\r\d\i\1\0\c\y\4\y\r\n\n\f\k\4\g\0\z\v\y\h\b\w\j\6\k\5\b\9\2\x\9\h\4\e\a\6\2\k\b\u\z\t\k\t\s\2\d\i\5\2\6\7\0\t\y\9\q\m\w\n\o\l\o\l\c\v\7\h\7\6\8\v\q\s\s\f\t\x\c\r\c\t\1\o\l\i\i\i\q\m\s\0\2\h\i\t\q\3\6\2\s\k\0\3\d\q\5\c\6\w\g\9\k\i\6\5\l\9\7\2\h\l\w\c\k\x\m\x\v\5\v\h\g\h\n\b\p\l\e\x\5\g\w\w\v\2\h\f\y\s\n\a\2\4\m\a\1\h\z\h\3\w\q\8\7\e\c\4\5\2\w\a\p\b\y\b\h\q\t\z\m\l\v\g\3\l\k\1\1\y\m\e\n\h\z\z\q\0\x\7\5\j\p\t\3\g\j\n\p\r\t\a\f\9\j\w\w\u\x\a\r\w\n\8\p\k\7\p\3\e\0\w\1\t\r\7\m\s\r\l\7\9\b\t\u\h\h\7\q\p\t\0\0\2\a\w\w\d\k\h\6\p\0\j\7\3\y\i\h\k\1\y\q\5\a\t\6\y\9\8\t\m\9\3\h\c\5\1\c\e\5\6\3\4\t\u\3\k\k\e\e\f\a\4\c\e\7\6\o\i\w\c\2\5\k\1\g\u\h\v\h\m\l\5\6\1\7\w\u\5\a\k\7\3\7\a\z\v\6\j\k\p\s\h\4\1\5\q\a\q\4\k\l\f\r\f\z\b\m\w\o\y\d\l\y\q\x\6\n\4\d\8\q\e\o\q\e\8\b\l\j\d\9\m\k\u\z\m\a\q\x ]] 00:28:52.520 13:52:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:52.520 13:52:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:52.779 [2024-07-10 13:52:31.885173] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:52.779 [2024-07-10 13:52:31.885321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139376 ] 00:28:52.779 [2024-07-10 13:52:32.034367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.037 [2024-07-10 13:52:32.251009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.673  Copying: 512/512 [B] (average 250 kBps) 00:28:54.674 00:28:54.674 13:52:33 -- dd/posix.sh@93 -- # [[ d5do1i0ta0zk4e1mxbvg6ux307xzlt5zcnx1eicdx0yi56kwzrp29fxr3shjgdi1lmsuinmr2hbhoe3gouw24du9kgrrqdyincfu721v921of66llkeytow7rdi10cy4yrnnfk4g0zvyhbwj6k5b92x9h4ea62kbuztkts2di52670ty9qmwnololcv7h768vqssftxcrct1oliiiqms02hitq362sk03dq5c6wg9ki65l972hlwckxmxv5vhghnbplex5gwwv2hfysna24ma1hzh3wq87ec452wapbybhqtzmlvg3lk11ymenhzzq0x75jpt3gjnprtaf9jwwuxarwn8pk7p3e0w1tr7msrl79btuhh7qpt002awwdkh6p0j73yihk1yq5at6y98tm93hc51ce5634tu3kkeefa4ce76oiwc25k1guhvhml5617wu5ak737azv6jkpsh415qaq4klfrfzbmwoydlyqx6n4d8qeoqe8bljd9mkuzmaqx == \d\5\d\o\1\i\0\t\a\0\z\k\4\e\1\m\x\b\v\g\6\u\x\3\0\7\x\z\l\t\5\z\c\n\x\1\e\i\c\d\x\0\y\i\5\6\k\w\z\r\p\2\9\f\x\r\3\s\h\j\g\d\i\1\l\m\s\u\i\n\m\r\2\h\b\h\o\e\3\g\o\u\w\2\4\d\u\9\k\g\r\r\q\d\y\i\n\c\f\u\7\2\1\v\9\2\1\o\f\6\6\l\l\k\e\y\t\o\w\7\r\d\i\1\0\c\y\4\y\r\n\n\f\k\4\g\0\z\v\y\h\b\w\j\6\k\5\b\9\2\x\9\h\4\e\a\6\2\k\b\u\z\t\k\t\s\2\d\i\5\2\6\7\0\t\y\9\q\m\w\n\o\l\o\l\c\v\7\h\7\6\8\v\q\s\s\f\t\x\c\r\c\t\1\o\l\i\i\i\q\m\s\0\2\h\i\t\q\3\6\2\s\k\0\3\d\q\5\c\6\w\g\9\k\i\6\5\l\9\7\2\h\l\w\c\k\x\m\x\v\5\v\h\g\h\n\b\p\l\e\x\5\g\w\w\v\2\h\f\y\s\n\a\2\4\m\a\1\h\z\h\3\w\q\8\7\e\c\4\5\2\w\a\p\b\y\b\h\q\t\z\m\l\v\g\3\l\k\1\1\y\m\e\n\h\z\z\q\0\x\7\5\j\p\t\3\g\j\n\p\r\t\a\f\9\j\w\w\u\x\a\r\w\n\8\p\k\7\p\3\e\0\w\1\t\r\7\m\s\r\l\7\9\b\t\u\h\h\7\q\p\t\0\0\2\a\w\w\d\k\h\6\p\0\j\7\3\y\i\h\k\1\y\q\5\a\t\6\y\9\8\t\m\9\3\h\c\5\1\c\e\5\6\3\4\t\u\3\k\k\e\e\f\a\4\c\e\7\6\o\i\w\c\2\5\k\1\g\u\h\v\h\m\l\5\6\1\7\w\u\5\a\k\7\3\7\a\z\v\6\j\k\p\s\h\4\1\5\q\a\q\4\k\l\f\r\f\z\b\m\w\o\y\d\l\y\q\x\6\n\4\d\8\q\e\o\q\e\8\b\l\j\d\9\m\k\u\z\m\a\q\x ]] 00:28:54.674 13:52:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:54.674 13:52:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:54.932 [2024-07-10 13:52:34.049147] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:54.932 [2024-07-10 13:52:34.049285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139401 ] 00:28:54.932 [2024-07-10 13:52:34.205758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.191 [2024-07-10 13:52:34.419910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.829  Copying: 512/512 [B] (average 250 kBps) 00:28:56.829 00:28:56.829 13:52:36 -- dd/posix.sh@93 -- # [[ d5do1i0ta0zk4e1mxbvg6ux307xzlt5zcnx1eicdx0yi56kwzrp29fxr3shjgdi1lmsuinmr2hbhoe3gouw24du9kgrrqdyincfu721v921of66llkeytow7rdi10cy4yrnnfk4g0zvyhbwj6k5b92x9h4ea62kbuztkts2di52670ty9qmwnololcv7h768vqssftxcrct1oliiiqms02hitq362sk03dq5c6wg9ki65l972hlwckxmxv5vhghnbplex5gwwv2hfysna24ma1hzh3wq87ec452wapbybhqtzmlvg3lk11ymenhzzq0x75jpt3gjnprtaf9jwwuxarwn8pk7p3e0w1tr7msrl79btuhh7qpt002awwdkh6p0j73yihk1yq5at6y98tm93hc51ce5634tu3kkeefa4ce76oiwc25k1guhvhml5617wu5ak737azv6jkpsh415qaq4klfrfzbmwoydlyqx6n4d8qeoqe8bljd9mkuzmaqx == \d\5\d\o\1\i\0\t\a\0\z\k\4\e\1\m\x\b\v\g\6\u\x\3\0\7\x\z\l\t\5\z\c\n\x\1\e\i\c\d\x\0\y\i\5\6\k\w\z\r\p\2\9\f\x\r\3\s\h\j\g\d\i\1\l\m\s\u\i\n\m\r\2\h\b\h\o\e\3\g\o\u\w\2\4\d\u\9\k\g\r\r\q\d\y\i\n\c\f\u\7\2\1\v\9\2\1\o\f\6\6\l\l\k\e\y\t\o\w\7\r\d\i\1\0\c\y\4\y\r\n\n\f\k\4\g\0\z\v\y\h\b\w\j\6\k\5\b\9\2\x\9\h\4\e\a\6\2\k\b\u\z\t\k\t\s\2\d\i\5\2\6\7\0\t\y\9\q\m\w\n\o\l\o\l\c\v\7\h\7\6\8\v\q\s\s\f\t\x\c\r\c\t\1\o\l\i\i\i\q\m\s\0\2\h\i\t\q\3\6\2\s\k\0\3\d\q\5\c\6\w\g\9\k\i\6\5\l\9\7\2\h\l\w\c\k\x\m\x\v\5\v\h\g\h\n\b\p\l\e\x\5\g\w\w\v\2\h\f\y\s\n\a\2\4\m\a\1\h\z\h\3\w\q\8\7\e\c\4\5\2\w\a\p\b\y\b\h\q\t\z\m\l\v\g\3\l\k\1\1\y\m\e\n\h\z\z\q\0\x\7\5\j\p\t\3\g\j\n\p\r\t\a\f\9\j\w\w\u\x\a\r\w\n\8\p\k\7\p\3\e\0\w\1\t\r\7\m\s\r\l\7\9\b\t\u\h\h\7\q\p\t\0\0\2\a\w\w\d\k\h\6\p\0\j\7\3\y\i\h\k\1\y\q\5\a\t\6\y\9\8\t\m\9\3\h\c\5\1\c\e\5\6\3\4\t\u\3\k\k\e\e\f\a\4\c\e\7\6\o\i\w\c\2\5\k\1\g\u\h\v\h\m\l\5\6\1\7\w\u\5\a\k\7\3\7\a\z\v\6\j\k\p\s\h\4\1\5\q\a\q\4\k\l\f\r\f\z\b\m\w\o\y\d\l\y\q\x\6\n\4\d\8\q\e\o\q\e\8\b\l\j\d\9\m\k\u\z\m\a\q\x ]] 00:28:56.829 13:52:36 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:56.829 13:52:36 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:56.829 13:52:36 -- dd/common.sh@98 -- # xtrace_disable 00:28:56.829 13:52:36 -- common/autotest_common.sh@10 -- # set +x 00:28:57.090 13:52:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:57.090 13:52:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:57.090 [2024-07-10 13:52:36.241852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:57.090 [2024-07-10 13:52:36.241990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139437 ] 00:28:57.090 [2024-07-10 13:52:36.399889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.351 [2024-07-10 13:52:36.621428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.323  Copying: 512/512 [B] (average 500 kBps) 00:28:59.323 00:28:59.323 13:52:38 -- dd/posix.sh@93 -- # [[ w7x6g8vr0l5h0v01gc2i6z04lw0lqux70dtrc3407acj00w102lny2yvoud2p07yifes2a4xcz63e99uthzkx7ngktvcbg3q0ns6gy0pnkwckqfpe38lrr5516cbl00tprzcnozshzc5129obsmowfpxc3dvazxrss5wovjiwz0sumsxdnwxxfxjql4410nvpkk1xxg1k9k8ftcj1748zuo0cio39v0ca1v9vjqx5a2rzgh20coyd6d1ydfodv5f414wzbc1rg9fjgssqgjoot8su4dzijxzk04obxdca7mjyr9maevx4lgwdhdddzyu2090zw3gcb9wl5ehlp0vym8fht4iqk14dpqvkdxs8zw3fnzc243gashwh4j7y8zflm4pu055opbgop0b6ettoqoqgh3wd3h4quz38w1773h8bt60rgws01diy7d09istcfprmc2xlb1r052dow9m77nhjr1t295jnn3w9e5u5jr0uhbv54glia7w6ngko9j3 == \w\7\x\6\g\8\v\r\0\l\5\h\0\v\0\1\g\c\2\i\6\z\0\4\l\w\0\l\q\u\x\7\0\d\t\r\c\3\4\0\7\a\c\j\0\0\w\1\0\2\l\n\y\2\y\v\o\u\d\2\p\0\7\y\i\f\e\s\2\a\4\x\c\z\6\3\e\9\9\u\t\h\z\k\x\7\n\g\k\t\v\c\b\g\3\q\0\n\s\6\g\y\0\p\n\k\w\c\k\q\f\p\e\3\8\l\r\r\5\5\1\6\c\b\l\0\0\t\p\r\z\c\n\o\z\s\h\z\c\5\1\2\9\o\b\s\m\o\w\f\p\x\c\3\d\v\a\z\x\r\s\s\5\w\o\v\j\i\w\z\0\s\u\m\s\x\d\n\w\x\x\f\x\j\q\l\4\4\1\0\n\v\p\k\k\1\x\x\g\1\k\9\k\8\f\t\c\j\1\7\4\8\z\u\o\0\c\i\o\3\9\v\0\c\a\1\v\9\v\j\q\x\5\a\2\r\z\g\h\2\0\c\o\y\d\6\d\1\y\d\f\o\d\v\5\f\4\1\4\w\z\b\c\1\r\g\9\f\j\g\s\s\q\g\j\o\o\t\8\s\u\4\d\z\i\j\x\z\k\0\4\o\b\x\d\c\a\7\m\j\y\r\9\m\a\e\v\x\4\l\g\w\d\h\d\d\d\z\y\u\2\0\9\0\z\w\3\g\c\b\9\w\l\5\e\h\l\p\0\v\y\m\8\f\h\t\4\i\q\k\1\4\d\p\q\v\k\d\x\s\8\z\w\3\f\n\z\c\2\4\3\g\a\s\h\w\h\4\j\7\y\8\z\f\l\m\4\p\u\0\5\5\o\p\b\g\o\p\0\b\6\e\t\t\o\q\o\q\g\h\3\w\d\3\h\4\q\u\z\3\8\w\1\7\7\3\h\8\b\t\6\0\r\g\w\s\0\1\d\i\y\7\d\0\9\i\s\t\c\f\p\r\m\c\2\x\l\b\1\r\0\5\2\d\o\w\9\m\7\7\n\h\j\r\1\t\2\9\5\j\n\n\3\w\9\e\5\u\5\j\r\0\u\h\b\v\5\4\g\l\i\a\7\w\6\n\g\k\o\9\j\3 ]] 00:28:59.323 13:52:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:59.323 13:52:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:59.323 [2024-07-10 13:52:38.442059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:59.323 [2024-07-10 13:52:38.442213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139466 ] 00:28:59.323 [2024-07-10 13:52:38.600950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.582 [2024-07-10 13:52:38.821198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.220  Copying: 512/512 [B] (average 500 kBps) 00:29:01.221 00:29:01.221 13:52:40 -- dd/posix.sh@93 -- # [[ w7x6g8vr0l5h0v01gc2i6z04lw0lqux70dtrc3407acj00w102lny2yvoud2p07yifes2a4xcz63e99uthzkx7ngktvcbg3q0ns6gy0pnkwckqfpe38lrr5516cbl00tprzcnozshzc5129obsmowfpxc3dvazxrss5wovjiwz0sumsxdnwxxfxjql4410nvpkk1xxg1k9k8ftcj1748zuo0cio39v0ca1v9vjqx5a2rzgh20coyd6d1ydfodv5f414wzbc1rg9fjgssqgjoot8su4dzijxzk04obxdca7mjyr9maevx4lgwdhdddzyu2090zw3gcb9wl5ehlp0vym8fht4iqk14dpqvkdxs8zw3fnzc243gashwh4j7y8zflm4pu055opbgop0b6ettoqoqgh3wd3h4quz38w1773h8bt60rgws01diy7d09istcfprmc2xlb1r052dow9m77nhjr1t295jnn3w9e5u5jr0uhbv54glia7w6ngko9j3 == \w\7\x\6\g\8\v\r\0\l\5\h\0\v\0\1\g\c\2\i\6\z\0\4\l\w\0\l\q\u\x\7\0\d\t\r\c\3\4\0\7\a\c\j\0\0\w\1\0\2\l\n\y\2\y\v\o\u\d\2\p\0\7\y\i\f\e\s\2\a\4\x\c\z\6\3\e\9\9\u\t\h\z\k\x\7\n\g\k\t\v\c\b\g\3\q\0\n\s\6\g\y\0\p\n\k\w\c\k\q\f\p\e\3\8\l\r\r\5\5\1\6\c\b\l\0\0\t\p\r\z\c\n\o\z\s\h\z\c\5\1\2\9\o\b\s\m\o\w\f\p\x\c\3\d\v\a\z\x\r\s\s\5\w\o\v\j\i\w\z\0\s\u\m\s\x\d\n\w\x\x\f\x\j\q\l\4\4\1\0\n\v\p\k\k\1\x\x\g\1\k\9\k\8\f\t\c\j\1\7\4\8\z\u\o\0\c\i\o\3\9\v\0\c\a\1\v\9\v\j\q\x\5\a\2\r\z\g\h\2\0\c\o\y\d\6\d\1\y\d\f\o\d\v\5\f\4\1\4\w\z\b\c\1\r\g\9\f\j\g\s\s\q\g\j\o\o\t\8\s\u\4\d\z\i\j\x\z\k\0\4\o\b\x\d\c\a\7\m\j\y\r\9\m\a\e\v\x\4\l\g\w\d\h\d\d\d\z\y\u\2\0\9\0\z\w\3\g\c\b\9\w\l\5\e\h\l\p\0\v\y\m\8\f\h\t\4\i\q\k\1\4\d\p\q\v\k\d\x\s\8\z\w\3\f\n\z\c\2\4\3\g\a\s\h\w\h\4\j\7\y\8\z\f\l\m\4\p\u\0\5\5\o\p\b\g\o\p\0\b\6\e\t\t\o\q\o\q\g\h\3\w\d\3\h\4\q\u\z\3\8\w\1\7\7\3\h\8\b\t\6\0\r\g\w\s\0\1\d\i\y\7\d\0\9\i\s\t\c\f\p\r\m\c\2\x\l\b\1\r\0\5\2\d\o\w\9\m\7\7\n\h\j\r\1\t\2\9\5\j\n\n\3\w\9\e\5\u\5\j\r\0\u\h\b\v\5\4\g\l\i\a\7\w\6\n\g\k\o\9\j\3 ]] 00:29:01.221 13:52:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:01.221 13:52:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:01.481 [2024-07-10 13:52:40.623954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:01.481 [2024-07-10 13:52:40.624750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139495 ] 00:29:01.481 [2024-07-10 13:52:40.782491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.741 [2024-07-10 13:52:41.002865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.688  Copying: 512/512 [B] (average 83 kBps) 00:29:03.688 00:29:03.688 13:52:42 -- dd/posix.sh@93 -- # [[ w7x6g8vr0l5h0v01gc2i6z04lw0lqux70dtrc3407acj00w102lny2yvoud2p07yifes2a4xcz63e99uthzkx7ngktvcbg3q0ns6gy0pnkwckqfpe38lrr5516cbl00tprzcnozshzc5129obsmowfpxc3dvazxrss5wovjiwz0sumsxdnwxxfxjql4410nvpkk1xxg1k9k8ftcj1748zuo0cio39v0ca1v9vjqx5a2rzgh20coyd6d1ydfodv5f414wzbc1rg9fjgssqgjoot8su4dzijxzk04obxdca7mjyr9maevx4lgwdhdddzyu2090zw3gcb9wl5ehlp0vym8fht4iqk14dpqvkdxs8zw3fnzc243gashwh4j7y8zflm4pu055opbgop0b6ettoqoqgh3wd3h4quz38w1773h8bt60rgws01diy7d09istcfprmc2xlb1r052dow9m77nhjr1t295jnn3w9e5u5jr0uhbv54glia7w6ngko9j3 == \w\7\x\6\g\8\v\r\0\l\5\h\0\v\0\1\g\c\2\i\6\z\0\4\l\w\0\l\q\u\x\7\0\d\t\r\c\3\4\0\7\a\c\j\0\0\w\1\0\2\l\n\y\2\y\v\o\u\d\2\p\0\7\y\i\f\e\s\2\a\4\x\c\z\6\3\e\9\9\u\t\h\z\k\x\7\n\g\k\t\v\c\b\g\3\q\0\n\s\6\g\y\0\p\n\k\w\c\k\q\f\p\e\3\8\l\r\r\5\5\1\6\c\b\l\0\0\t\p\r\z\c\n\o\z\s\h\z\c\5\1\2\9\o\b\s\m\o\w\f\p\x\c\3\d\v\a\z\x\r\s\s\5\w\o\v\j\i\w\z\0\s\u\m\s\x\d\n\w\x\x\f\x\j\q\l\4\4\1\0\n\v\p\k\k\1\x\x\g\1\k\9\k\8\f\t\c\j\1\7\4\8\z\u\o\0\c\i\o\3\9\v\0\c\a\1\v\9\v\j\q\x\5\a\2\r\z\g\h\2\0\c\o\y\d\6\d\1\y\d\f\o\d\v\5\f\4\1\4\w\z\b\c\1\r\g\9\f\j\g\s\s\q\g\j\o\o\t\8\s\u\4\d\z\i\j\x\z\k\0\4\o\b\x\d\c\a\7\m\j\y\r\9\m\a\e\v\x\4\l\g\w\d\h\d\d\d\z\y\u\2\0\9\0\z\w\3\g\c\b\9\w\l\5\e\h\l\p\0\v\y\m\8\f\h\t\4\i\q\k\1\4\d\p\q\v\k\d\x\s\8\z\w\3\f\n\z\c\2\4\3\g\a\s\h\w\h\4\j\7\y\8\z\f\l\m\4\p\u\0\5\5\o\p\b\g\o\p\0\b\6\e\t\t\o\q\o\q\g\h\3\w\d\3\h\4\q\u\z\3\8\w\1\7\7\3\h\8\b\t\6\0\r\g\w\s\0\1\d\i\y\7\d\0\9\i\s\t\c\f\p\r\m\c\2\x\l\b\1\r\0\5\2\d\o\w\9\m\7\7\n\h\j\r\1\t\2\9\5\j\n\n\3\w\9\e\5\u\5\j\r\0\u\h\b\v\5\4\g\l\i\a\7\w\6\n\g\k\o\9\j\3 ]] 00:29:03.688 13:52:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:03.689 13:52:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:03.689 [2024-07-10 13:52:42.825202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
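The dd_flags_misc_forced_aio loop driving these runs repeats one spdk_dd copy per output flag (nonblock, sync and dsync are the iterations visible in this section) and then verifies the round trip with the long [[ ... == ... ]] content comparison shown above. A minimal sketch of that loop, with cmp standing in for the test's magic-string comparison:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    for flag in nonblock sync dsync; do
        # copy the dump file through spdk_dd's AIO path with the flag under test
        "$DD" --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock \
              --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag"
        # the suite compares file contents; cmp is an assumed stand-in here
        cmp /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    done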
00:29:03.689 [2024-07-10 13:52:42.825344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139542 ] 00:29:03.689 [2024-07-10 13:52:42.982398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.951 [2024-07-10 13:52:43.193336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.590  Copying: 512/512 [B] (average 250 kBps) 00:29:05.590 00:29:05.850 ************************************ 00:29:05.850 END TEST dd_flags_misc_forced_aio 00:29:05.850 ************************************ 00:29:05.850 13:52:44 -- dd/posix.sh@93 -- # [[ w7x6g8vr0l5h0v01gc2i6z04lw0lqux70dtrc3407acj00w102lny2yvoud2p07yifes2a4xcz63e99uthzkx7ngktvcbg3q0ns6gy0pnkwckqfpe38lrr5516cbl00tprzcnozshzc5129obsmowfpxc3dvazxrss5wovjiwz0sumsxdnwxxfxjql4410nvpkk1xxg1k9k8ftcj1748zuo0cio39v0ca1v9vjqx5a2rzgh20coyd6d1ydfodv5f414wzbc1rg9fjgssqgjoot8su4dzijxzk04obxdca7mjyr9maevx4lgwdhdddzyu2090zw3gcb9wl5ehlp0vym8fht4iqk14dpqvkdxs8zw3fnzc243gashwh4j7y8zflm4pu055opbgop0b6ettoqoqgh3wd3h4quz38w1773h8bt60rgws01diy7d09istcfprmc2xlb1r052dow9m77nhjr1t295jnn3w9e5u5jr0uhbv54glia7w6ngko9j3 == \w\7\x\6\g\8\v\r\0\l\5\h\0\v\0\1\g\c\2\i\6\z\0\4\l\w\0\l\q\u\x\7\0\d\t\r\c\3\4\0\7\a\c\j\0\0\w\1\0\2\l\n\y\2\y\v\o\u\d\2\p\0\7\y\i\f\e\s\2\a\4\x\c\z\6\3\e\9\9\u\t\h\z\k\x\7\n\g\k\t\v\c\b\g\3\q\0\n\s\6\g\y\0\p\n\k\w\c\k\q\f\p\e\3\8\l\r\r\5\5\1\6\c\b\l\0\0\t\p\r\z\c\n\o\z\s\h\z\c\5\1\2\9\o\b\s\m\o\w\f\p\x\c\3\d\v\a\z\x\r\s\s\5\w\o\v\j\i\w\z\0\s\u\m\s\x\d\n\w\x\x\f\x\j\q\l\4\4\1\0\n\v\p\k\k\1\x\x\g\1\k\9\k\8\f\t\c\j\1\7\4\8\z\u\o\0\c\i\o\3\9\v\0\c\a\1\v\9\v\j\q\x\5\a\2\r\z\g\h\2\0\c\o\y\d\6\d\1\y\d\f\o\d\v\5\f\4\1\4\w\z\b\c\1\r\g\9\f\j\g\s\s\q\g\j\o\o\t\8\s\u\4\d\z\i\j\x\z\k\0\4\o\b\x\d\c\a\7\m\j\y\r\9\m\a\e\v\x\4\l\g\w\d\h\d\d\d\z\y\u\2\0\9\0\z\w\3\g\c\b\9\w\l\5\e\h\l\p\0\v\y\m\8\f\h\t\4\i\q\k\1\4\d\p\q\v\k\d\x\s\8\z\w\3\f\n\z\c\2\4\3\g\a\s\h\w\h\4\j\7\y\8\z\f\l\m\4\p\u\0\5\5\o\p\b\g\o\p\0\b\6\e\t\t\o\q\o\q\g\h\3\w\d\3\h\4\q\u\z\3\8\w\1\7\7\3\h\8\b\t\6\0\r\g\w\s\0\1\d\i\y\7\d\0\9\i\s\t\c\f\p\r\m\c\2\x\l\b\1\r\0\5\2\d\o\w\9\m\7\7\n\h\j\r\1\t\2\9\5\j\n\n\3\w\9\e\5\u\5\j\r\0\u\h\b\v\5\4\g\l\i\a\7\w\6\n\g\k\o\9\j\3 ]] 00:29:05.850 00:29:05.850 real 0m17.422s 00:29:05.850 user 0m14.742s 00:29:05.850 sys 0m1.608s 00:29:05.850 13:52:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.850 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:29:05.850 13:52:44 -- dd/posix.sh@1 -- # cleanup 00:29:05.850 13:52:44 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:05.850 13:52:44 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:05.850 ************************************ 00:29:05.850 END TEST spdk_dd_posix 00:29:05.850 ************************************ 00:29:05.850 00:29:05.850 real 1m11.950s 00:29:05.850 user 0m59.082s 00:29:05.850 sys 0m6.856s 00:29:05.850 13:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.850 13:52:45 -- common/autotest_common.sh@10 -- # set +x 00:29:05.850 13:52:45 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:05.850 13:52:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.850 13:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.850 13:52:45 -- 
common/autotest_common.sh@10 -- # set +x 00:29:05.850 ************************************ 00:29:05.850 START TEST spdk_dd_malloc 00:29:05.850 ************************************ 00:29:05.850 13:52:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:29:05.850 * Looking for test storage... 00:29:05.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:05.850 13:52:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:05.850 13:52:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.850 13:52:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.850 13:52:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.850 13:52:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.850 13:52:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.850 13:52:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.850 13:52:45 -- paths/export.sh@5 -- # export PATH 00:29:05.851 13:52:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:05.851 13:52:45 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:29:05.851 13:52:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.851 13:52:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.851 13:52:45 -- common/autotest_common.sh@10 -- # set +x 00:29:05.851 ************************************ 00:29:05.851 START TEST dd_malloc_copy 00:29:05.851 ************************************ 00:29:06.110 13:52:45 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:29:06.110 13:52:45 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:29:06.110 13:52:45 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:29:06.110 13:52:45 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:29:06.110 13:52:45 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:29:06.110 13:52:45 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:29:06.110 13:52:45 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:29:06.110 13:52:45 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:29:06.110 13:52:45 -- dd/malloc.sh@28 -- # gen_conf 00:29:06.110 13:52:45 -- dd/common.sh@31 -- # xtrace_disable 00:29:06.110 13:52:45 -- common/autotest_common.sh@10 -- # set +x 00:29:06.110 [2024-07-10 13:52:45.265469] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:06.110 [2024-07-10 13:52:45.265655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139632 ] 00:29:06.110 { 00:29:06.110 "subsystems": [ 00:29:06.110 { 00:29:06.110 "subsystem": "bdev", 00:29:06.110 "config": [ 00:29:06.110 { 00:29:06.110 "params": { 00:29:06.110 "num_blocks": 1048576, 00:29:06.110 "block_size": 512, 00:29:06.110 "name": "malloc0" 00:29:06.110 }, 00:29:06.110 "method": "bdev_malloc_create" 00:29:06.110 }, 00:29:06.110 { 00:29:06.110 "params": { 00:29:06.110 "num_blocks": 1048576, 00:29:06.110 "block_size": 512, 00:29:06.110 "name": "malloc1" 00:29:06.110 }, 00:29:06.110 "method": "bdev_malloc_create" 00:29:06.110 }, 00:29:06.110 { 00:29:06.110 "method": "bdev_wait_for_examine" 00:29:06.110 } 00:29:06.110 ] 00:29:06.110 } 00:29:06.110 ] 00:29:06.110 } 00:29:06.110 [2024-07-10 13:52:45.430964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.369 [2024-07-10 13:52:45.646649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.491  Copying: 233/512 [MB] (233 MBps) Copying: 471/512 [MB] (237 MBps) Copying: 512/512 [MB] (average 236 MBps) 00:29:15.491 00:29:15.491 13:52:53 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:29:15.491 13:52:53 -- dd/malloc.sh@33 -- # gen_conf 00:29:15.491 13:52:53 -- dd/common.sh@31 -- # xtrace_disable 00:29:15.491 13:52:53 -- common/autotest_common.sh@10 -- # set +x 00:29:15.491 [2024-07-10 13:52:53.976889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
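The JSON handed to spdk_dd on /dev/fd/62 above defines the two bdevs being copied: each is a malloc bdev of 1048576 blocks of 512 bytes, i.e. 512 MiB. Written to a standalone file (an assumed simplification of the fd-based plumbing), the whole malloc_copy round trip reduces to:

    # malloc.json, transcribed from the config dump above:
    # {"subsystems":[{"subsystem":"bdev","config":[
    #   {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
    #   {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
    #   {"method":"bdev_wait_for_examine"}]}]}
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --ib=malloc0 --ob=malloc1 --json malloc.json   # forward pass (average 236 MBps above)
    "$DD" --ib=malloc1 --ob=malloc0 --json malloc.json   # reverse pass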
00:29:15.491 [2024-07-10 13:52:53.977013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139772 ] 00:29:15.491 { 00:29:15.491 "subsystems": [ 00:29:15.491 { 00:29:15.491 "subsystem": "bdev", 00:29:15.491 "config": [ 00:29:15.491 { 00:29:15.491 "params": { 00:29:15.491 "num_blocks": 1048576, 00:29:15.491 "block_size": 512, 00:29:15.491 "name": "malloc0" 00:29:15.491 }, 00:29:15.491 "method": "bdev_malloc_create" 00:29:15.491 }, 00:29:15.491 { 00:29:15.491 "params": { 00:29:15.491 "num_blocks": 1048576, 00:29:15.491 "block_size": 512, 00:29:15.491 "name": "malloc1" 00:29:15.491 }, 00:29:15.491 "method": "bdev_malloc_create" 00:29:15.491 }, 00:29:15.491 { 00:29:15.491 "method": "bdev_wait_for_examine" 00:29:15.491 } 00:29:15.491 ] 00:29:15.491 } 00:29:15.491 ] 00:29:15.491 } 00:29:15.491 [2024-07-10 13:52:54.135128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.491 [2024-07-10 13:52:54.353129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.236  Copying: 236/512 [MB] (236 MBps) Copying: 465/512 [MB] (229 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:29:24.236 00:29:24.236 00:29:24.236 real 0m17.724s 00:29:24.236 user 0m16.692s 00:29:24.236 sys 0m0.913s 00:29:24.236 13:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.236 13:53:02 -- common/autotest_common.sh@10 -- # set +x 00:29:24.236 ************************************ 00:29:24.236 END TEST dd_malloc_copy 00:29:24.236 ************************************ 00:29:24.236 ************************************ 00:29:24.236 END TEST spdk_dd_malloc 00:29:24.236 ************************************ 00:29:24.236 00:29:24.236 real 0m17.903s 00:29:24.236 user 0m16.797s 00:29:24.236 sys 0m0.994s 00:29:24.236 13:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.236 13:53:02 -- common/autotest_common.sh@10 -- # set +x 00:29:24.236 13:53:03 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:24.236 13:53:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:24.236 13:53:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.236 13:53:03 -- common/autotest_common.sh@10 -- # set +x 00:29:24.236 ************************************ 00:29:24.236 START TEST spdk_dd_bdev_to_bdev 00:29:24.236 ************************************ 00:29:24.236 13:53:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:29:24.236 * Looking for test storage... 
00:29:24.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:24.236 13:53:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.236 13:53:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.236 13:53:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.236 13:53:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.236 13:53:03 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.236 13:53:03 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.236 13:53:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.236 13:53:03 -- paths/export.sh@5 -- # export PATH 00:29:24.236 13:53:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:29:24.236 13:53:03 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:29:24.236 13:53:03 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:29:24.236 [2024-07-10 13:53:03.193720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:24.236 [2024-07-10 13:53:03.193883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139956 ] 00:29:24.236 [2024-07-10 13:53:03.355521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.496 [2024-07-10 13:53:03.593143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.437  Copying: 256/256 [MB] (average 1430 MBps) 00:29:26.437 00:29:26.437 13:53:05 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:26.437 13:53:05 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:26.437 13:53:05 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:29:26.437 13:53:05 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:29:26.437 13:53:05 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:26.437 13:53:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:26.437 13:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.437 13:53:05 -- common/autotest_common.sh@10 -- # set +x 00:29:26.437 ************************************ 00:29:26.437 START TEST dd_inflate_file 00:29:26.437 ************************************ 00:29:26.437 13:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:29:26.437 [2024-07-10 13:53:05.782540] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:26.437 [2024-07-10 13:53:05.783059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139991 ] 00:29:26.696 [2024-07-10 13:53:05.946240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.954 [2024-07-10 13:53:06.180747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.905  Copying: 64/64 [MB] (average 1163 MBps) 00:29:28.905 00:29:28.905 ************************************ 00:29:28.905 END TEST dd_inflate_file 00:29:28.905 ************************************ 00:29:28.905 00:29:28.905 real 0m2.471s 00:29:28.905 user 0m2.072s 00:29:28.905 sys 0m0.269s 00:29:28.905 13:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.905 13:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 13:53:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:29:28.905 13:53:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:29:28.905 13:53:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:28.905 13:53:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:29:28.905 13:53:08 -- dd/common.sh@31 -- # xtrace_disable 00:29:28.905 13:53:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:28.905 13:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:28.905 13:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:28.905 13:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:29.164 ************************************ 00:29:29.164 START TEST dd_copy_to_out_bdev 00:29:29.164 ************************************ 00:29:29.164 13:53:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:29:29.164 [2024-07-10 13:53:08.310205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
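One detail worth noting from the dd_inflate_file run above: the dump file begins as the 27-byte magic line ('This Is Our Magic, find it' plus a newline, presumably what the echo above is redirected into), and the --oflag=append pass grows it by exactly 64 MiB, which is where the 67108891-byte size check comes from (67108864 + 27). As a sketch, with paths shortened:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    echo 'This Is Our Magic, find it' > dd.dump0     # 27 bytes including the newline
    "$DD" --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
    wc -c < dd.dump0                                 # 67108891 = 64 MiB + 27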
00:29:29.164 [2024-07-10 13:53:08.310358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140055 ] 00:29:29.164 { 00:29:29.164 "subsystems": [ 00:29:29.165 { 00:29:29.165 "subsystem": "bdev", 00:29:29.165 "config": [ 00:29:29.165 { 00:29:29.165 "params": { 00:29:29.165 "block_size": 4096, 00:29:29.165 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:29.165 "name": "aio1" 00:29:29.165 }, 00:29:29.165 "method": "bdev_aio_create" 00:29:29.165 }, 00:29:29.165 { 00:29:29.165 "params": { 00:29:29.165 "trtype": "pcie", 00:29:29.165 "traddr": "0000:00:06.0", 00:29:29.165 "name": "Nvme0" 00:29:29.165 }, 00:29:29.165 "method": "bdev_nvme_attach_controller" 00:29:29.165 }, 00:29:29.165 { 00:29:29.165 "method": "bdev_wait_for_examine" 00:29:29.165 } 00:29:29.165 ] 00:29:29.165 } 00:29:29.165 ] 00:29:29.165 } 00:29:29.165 [2024-07-10 13:53:08.473605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.424 [2024-07-10 13:53:08.720614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.717  Copying: 64/64 [MB] (average 74 MBps) 00:29:32.717 00:29:32.717 00:29:32.717 real 0m3.417s 00:29:32.717 user 0m3.043s 00:29:32.717 sys 0m0.293s 00:29:32.717 13:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.717 13:53:11 -- common/autotest_common.sh@10 -- # set +x 00:29:32.717 ************************************ 00:29:32.717 END TEST dd_copy_to_out_bdev 00:29:32.717 ************************************ 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:29:32.717 13:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:32.717 13:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:32.717 13:53:11 -- common/autotest_common.sh@10 -- # set +x 00:29:32.717 ************************************ 00:29:32.717 START TEST dd_offset_magic 00:29:32.717 ************************************ 00:29:32.717 13:53:11 -- common/autotest_common.sh@1104 -- # offset_magic 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:29:32.717 13:53:11 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:32.717 13:53:11 -- dd/common.sh@31 -- # xtrace_disable 00:29:32.717 13:53:11 -- common/autotest_common.sh@10 -- # set +x 00:29:32.717 [2024-07-10 13:53:11.802797] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:32.717 [2024-07-10 13:53:11.802938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140120 ] 00:29:32.717 { 00:29:32.717 "subsystems": [ 00:29:32.717 { 00:29:32.717 "subsystem": "bdev", 00:29:32.717 "config": [ 00:29:32.717 { 00:29:32.717 "params": { 00:29:32.717 "block_size": 4096, 00:29:32.717 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:32.717 "name": "aio1" 00:29:32.717 }, 00:29:32.717 "method": "bdev_aio_create" 00:29:32.717 }, 00:29:32.717 { 00:29:32.717 "params": { 00:29:32.717 "trtype": "pcie", 00:29:32.717 "traddr": "0000:00:06.0", 00:29:32.717 "name": "Nvme0" 00:29:32.717 }, 00:29:32.717 "method": "bdev_nvme_attach_controller" 00:29:32.717 }, 00:29:32.717 { 00:29:32.717 "method": "bdev_wait_for_examine" 00:29:32.717 } 00:29:32.717 ] 00:29:32.717 } 00:29:32.717 ] 00:29:32.717 } 00:29:32.717 [2024-07-10 13:53:11.963332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.977 [2024-07-10 13:53:12.190856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.292  Copying: 65/65 [MB] (average 187 MBps) 00:29:35.292 00:29:35.292 13:53:14 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:29:35.292 13:53:14 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:35.292 13:53:14 -- dd/common.sh@31 -- # xtrace_disable 00:29:35.292 13:53:14 -- common/autotest_common.sh@10 -- # set +x 00:29:35.292 [2024-07-10 13:53:14.527179] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:35.292 [2024-07-10 13:53:14.527304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140180 ] 00:29:35.292 { 00:29:35.292 "subsystems": [ 00:29:35.292 { 00:29:35.292 "subsystem": "bdev", 00:29:35.292 "config": [ 00:29:35.292 { 00:29:35.292 "params": { 00:29:35.292 "block_size": 4096, 00:29:35.292 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:35.292 "name": "aio1" 00:29:35.292 }, 00:29:35.292 "method": "bdev_aio_create" 00:29:35.292 }, 00:29:35.292 { 00:29:35.292 "params": { 00:29:35.292 "trtype": "pcie", 00:29:35.292 "traddr": "0000:00:06.0", 00:29:35.292 "name": "Nvme0" 00:29:35.292 }, 00:29:35.292 "method": "bdev_nvme_attach_controller" 00:29:35.292 }, 00:29:35.292 { 00:29:35.292 "method": "bdev_wait_for_examine" 00:29:35.292 } 00:29:35.292 ] 00:29:35.292 } 00:29:35.292 ] 00:29:35.292 } 00:29:35.551 [2024-07-10 13:53:14.685643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.810 [2024-07-10 13:53:14.910432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.976  Copying: 1024/1024 [kB] (average 1000 MBps) 00:29:37.976 00:29:37.976 13:53:16 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:37.976 13:53:16 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:37.976 13:53:16 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:29:37.976 13:53:16 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:29:37.976 13:53:16 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:29:37.976 13:53:16 -- dd/common.sh@31 -- # xtrace_disable 00:29:37.976 13:53:16 -- common/autotest_common.sh@10 -- # set +x 00:29:37.976 [2024-07-10 13:53:16.971876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
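Each offset in the dd_offset_magic loop gets the same round trip seen above: 65 MiB are copied from the NVMe bdev into the AIO file at the offset under test, then 1 MiB is read back from that offset and the first 26 bytes are matched against the magic string. A sketch of the loop (conf.json stands in for the JSON the test passes on /dev/fd/62, file paths are shortened):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    for offset in 16 64; do                    # offsets in 1 MiB units, as in --seek/--skip above
        "$DD" --ib=Nvme0n1 --ob=aio1 --count=65 --seek="$offset" --bs=1048576 --json conf.json
        "$DD" --ib=aio1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json conf.json
        read -rn26 magic_check < dd.dump1      # 26 bytes: 'This Is Our Magic, find it'
        [[ $magic_check == 'This Is Our Magic, find it' ]]
    done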
00:29:37.976 [2024-07-10 13:53:16.972032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140214 ] 00:29:37.976 { 00:29:37.976 "subsystems": [ 00:29:37.976 { 00:29:37.976 "subsystem": "bdev", 00:29:37.976 "config": [ 00:29:37.976 { 00:29:37.976 "params": { 00:29:37.976 "block_size": 4096, 00:29:37.976 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:37.976 "name": "aio1" 00:29:37.976 }, 00:29:37.976 "method": "bdev_aio_create" 00:29:37.976 }, 00:29:37.976 { 00:29:37.976 "params": { 00:29:37.976 "trtype": "pcie", 00:29:37.976 "traddr": "0000:00:06.0", 00:29:37.976 "name": "Nvme0" 00:29:37.976 }, 00:29:37.976 "method": "bdev_nvme_attach_controller" 00:29:37.976 }, 00:29:37.976 { 00:29:37.976 "method": "bdev_wait_for_examine" 00:29:37.976 } 00:29:37.976 ] 00:29:37.976 } 00:29:37.976 ] 00:29:37.976 } 00:29:37.976 [2024-07-10 13:53:17.130344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.235 [2024-07-10 13:53:17.353439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.111  Copying: 65/65 [MB] (average 225 MBps) 00:29:40.111 00:29:40.111 13:53:19 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:29:40.111 13:53:19 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:29:40.111 13:53:19 -- dd/common.sh@31 -- # xtrace_disable 00:29:40.111 13:53:19 -- common/autotest_common.sh@10 -- # set +x 00:29:40.369 [2024-07-10 13:53:19.509641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:40.369 [2024-07-10 13:53:19.509804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140255 ] 00:29:40.369 { 00:29:40.369 "subsystems": [ 00:29:40.369 { 00:29:40.369 "subsystem": "bdev", 00:29:40.369 "config": [ 00:29:40.369 { 00:29:40.369 "params": { 00:29:40.369 "block_size": 4096, 00:29:40.369 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:40.369 "name": "aio1" 00:29:40.369 }, 00:29:40.369 "method": "bdev_aio_create" 00:29:40.369 }, 00:29:40.369 { 00:29:40.369 "params": { 00:29:40.369 "trtype": "pcie", 00:29:40.369 "traddr": "0000:00:06.0", 00:29:40.369 "name": "Nvme0" 00:29:40.369 }, 00:29:40.369 "method": "bdev_nvme_attach_controller" 00:29:40.369 }, 00:29:40.369 { 00:29:40.369 "method": "bdev_wait_for_examine" 00:29:40.369 } 00:29:40.369 ] 00:29:40.369 } 00:29:40.369 ] 00:29:40.369 } 00:29:40.369 [2024-07-10 13:53:19.673625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.628 [2024-07-10 13:53:19.897971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.601  Copying: 1024/1024 [kB] (average 500 MBps) 00:29:42.601 00:29:42.601 13:53:21 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:29:42.601 13:53:21 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:29:42.601 00:29:42.601 real 0m10.139s 00:29:42.601 user 0m8.239s 00:29:42.601 sys 0m1.073s 00:29:42.601 13:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.601 13:53:21 -- common/autotest_common.sh@10 -- # set +x 00:29:42.601 ************************************ 00:29:42.601 END TEST dd_offset_magic 00:29:42.601 ************************************ 00:29:42.601 13:53:21 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:29:42.601 13:53:21 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:29:42.601 13:53:21 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:29:42.601 13:53:21 -- dd/common.sh@11 -- # local nvme_ref= 00:29:42.601 13:53:21 -- dd/common.sh@12 -- # local size=4194330 00:29:42.601 13:53:21 -- dd/common.sh@14 -- # local bs=1048576 00:29:42.601 13:53:21 -- dd/common.sh@15 -- # local count=5 00:29:42.601 13:53:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:29:42.601 13:53:21 -- dd/common.sh@18 -- # gen_conf 00:29:42.601 13:53:21 -- dd/common.sh@31 -- # xtrace_disable 00:29:42.601 13:53:21 -- common/autotest_common.sh@10 -- # set +x 00:29:42.860 [2024-07-10 13:53:21.989319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
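The clear_nvme cleanup starting above zeroes the region the tests touched: with bs=1048576, the 4194330-byte span rounds up to count=5, so five 1 MiB units of /dev/zero are written over each bdev in turn. Sketch (conf.json again standing in for the fd-passed JSON):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json conf.json   # wipe the NVMe bdev
    "$DD" --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json conf.json      # then the AIO bdev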
00:29:42.860 [2024-07-10 13:53:21.989453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140304 ] 00:29:42.860 { 00:29:42.860 "subsystems": [ 00:29:42.860 { 00:29:42.860 "subsystem": "bdev", 00:29:42.860 "config": [ 00:29:42.860 { 00:29:42.860 "params": { 00:29:42.860 "block_size": 4096, 00:29:42.860 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:42.860 "name": "aio1" 00:29:42.860 }, 00:29:42.860 "method": "bdev_aio_create" 00:29:42.860 }, 00:29:42.860 { 00:29:42.860 "params": { 00:29:42.860 "trtype": "pcie", 00:29:42.860 "traddr": "0000:00:06.0", 00:29:42.860 "name": "Nvme0" 00:29:42.860 }, 00:29:42.860 "method": "bdev_nvme_attach_controller" 00:29:42.860 }, 00:29:42.860 { 00:29:42.860 "method": "bdev_wait_for_examine" 00:29:42.860 } 00:29:42.860 ] 00:29:42.860 } 00:29:42.860 ] 00:29:42.860 } 00:29:42.860 [2024-07-10 13:53:22.150126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.118 [2024-07-10 13:53:22.385725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.064  Copying: 5120/5120 [kB] (average 1000 MBps) 00:29:45.064 00:29:45.064 13:53:24 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:29:45.064 13:53:24 -- dd/common.sh@10 -- # local bdev=aio1 00:29:45.064 13:53:24 -- dd/common.sh@11 -- # local nvme_ref= 00:29:45.064 13:53:24 -- dd/common.sh@12 -- # local size=4194330 00:29:45.064 13:53:24 -- dd/common.sh@14 -- # local bs=1048576 00:29:45.064 13:53:24 -- dd/common.sh@15 -- # local count=5 00:29:45.064 13:53:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:29:45.064 13:53:24 -- dd/common.sh@18 -- # gen_conf 00:29:45.064 13:53:24 -- dd/common.sh@31 -- # xtrace_disable 00:29:45.064 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:29:45.064 { 00:29:45.064 "subsystems": [ 00:29:45.064 { 00:29:45.064 "subsystem": "bdev", 00:29:45.064 "config": [ 00:29:45.064 { 00:29:45.064 "params": { 00:29:45.064 "block_size": 4096, 00:29:45.064 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:29:45.064 "name": "aio1" 00:29:45.064 }, 00:29:45.064 "method": "bdev_aio_create" 00:29:45.064 }, 00:29:45.064 { 00:29:45.064 "params": { 00:29:45.064 "trtype": "pcie", 00:29:45.064 "traddr": "0000:00:06.0", 00:29:45.064 "name": "Nvme0" 00:29:45.064 }, 00:29:45.064 "method": "bdev_nvme_attach_controller" 00:29:45.064 }, 00:29:45.064 { 00:29:45.064 "method": "bdev_wait_for_examine" 00:29:45.064 } 00:29:45.064 ] 00:29:45.064 } 00:29:45.064 ] 00:29:45.064 } 00:29:45.064 [2024-07-10 13:53:24.316404] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:45.064 [2024-07-10 13:53:24.316554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140361 ] 00:29:45.323 [2024-07-10 13:53:24.481178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.581 [2024-07-10 13:53:24.724016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.523  Copying: 5120/5120 [kB] (average 263 MBps) 00:29:47.523 00:29:47.523 13:53:26 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:47.780 00:29:47.780 real 0m23.865s 00:29:47.780 user 0m19.777s 00:29:47.780 sys 0m2.673s 00:29:47.780 13:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.780 ************************************ 00:29:47.780 END TEST spdk_dd_bdev_to_bdev 00:29:47.780 ************************************ 00:29:47.780 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:29:47.780 13:53:26 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:47.780 13:53:26 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:47.780 13:53:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:47.780 13:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:47.780 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:29:47.780 ************************************ 00:29:47.780 START TEST spdk_dd_sparse 00:29:47.780 ************************************ 00:29:47.780 13:53:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:47.780 * Looking for test storage... 
00:29:47.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:47.780 13:53:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:47.780 13:53:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.780 13:53:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.780 13:53:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.780 13:53:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:47.780 13:53:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:47.780 13:53:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:47.781 13:53:27 -- paths/export.sh@5 -- # export PATH 00:29:47.781 13:53:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:47.781 13:53:27 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:47.781 13:53:27 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:47.781 13:53:27 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:47.781 13:53:27 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:47.781 13:53:27 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:47.781 13:53:27 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:47.781 13:53:27 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:47.781 13:53:27 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:47.781 13:53:27 -- dd/sparse.sh@118 -- # prepare 00:29:47.781 13:53:27 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:47.781 13:53:27 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:47.781 1+0 records in 00:29:47.781 1+0 records 
out 00:29:47.781 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0108563 s, 386 MB/s 00:29:47.781 13:53:27 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:47.781 1+0 records in 00:29:47.781 1+0 records out 00:29:47.781 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0123821 s, 339 MB/s 00:29:47.781 13:53:27 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:47.781 1+0 records in 00:29:47.781 1+0 records out 00:29:47.781 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0114965 s, 365 MB/s 00:29:47.781 13:53:27 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:47.781 13:53:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:47.781 13:53:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:47.781 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:29:48.040 ************************************ 00:29:48.040 START TEST dd_sparse_file_to_file 00:29:48.040 ************************************ 00:29:48.040 13:53:27 -- common/autotest_common.sh@1104 -- # file_to_file 00:29:48.040 13:53:27 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:48.040 13:53:27 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:48.040 13:53:27 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:48.040 13:53:27 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:48.040 13:53:27 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:29:48.040 13:53:27 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:48.040 13:53:27 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:48.040 13:53:27 -- dd/sparse.sh@41 -- # gen_conf 00:29:48.040 13:53:27 -- dd/common.sh@31 -- # xtrace_disable 00:29:48.040 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:29:48.040 [2024-07-10 13:53:27.197832] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
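The prepare step above lays file_zero1 out so that sparseness is observable: three 4 MiB data extents at 0, 16 and 32 MiB inside a file whose apparent size is 36 MiB. The size/blocks assertions that follow compare stat's apparent size (%s) against its allocated 512-byte blocks (%b):

    truncate dd_sparse_aio_disk --size 104857600      # 100 MB AIO backing file
    for seek in 0 4 8; do                             # 4 MiB of data at 0, 16 and 32 MiB
        dd if=/dev/zero of=file_zero1 bs=4M count=1 seek="$seek"
    done
    stat --printf=%s file_zero1    # 37748736 bytes apparent (36 MiB)
    stat --printf=%b file_zero1    # 24576 blocks allocated (12 MiB of real data)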
00:29:48.040 [2024-07-10 13:53:27.197989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140448 ] 00:29:48.040 { 00:29:48.040 "subsystems": [ 00:29:48.040 { 00:29:48.040 "subsystem": "bdev", 00:29:48.040 "config": [ 00:29:48.040 { 00:29:48.040 "params": { 00:29:48.040 "block_size": 4096, 00:29:48.040 "filename": "dd_sparse_aio_disk", 00:29:48.040 "name": "dd_aio" 00:29:48.040 }, 00:29:48.040 "method": "bdev_aio_create" 00:29:48.040 }, 00:29:48.040 { 00:29:48.040 "params": { 00:29:48.040 "lvs_name": "dd_lvstore", 00:29:48.040 "bdev_name": "dd_aio" 00:29:48.040 }, 00:29:48.040 "method": "bdev_lvol_create_lvstore" 00:29:48.040 }, 00:29:48.040 { 00:29:48.040 "method": "bdev_wait_for_examine" 00:29:48.040 } 00:29:48.040 ] 00:29:48.040 } 00:29:48.040 ] 00:29:48.040 } 00:29:48.040 [2024-07-10 13:53:27.356772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.300 [2024-07-10 13:53:27.582124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.773  Copying: 12/36 [MB] (average 1090 MBps) 00:29:50.773 00:29:50.773 13:53:29 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:50.773 13:53:29 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:50.773 13:53:29 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:50.773 13:53:29 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:50.773 13:53:29 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:50.773 13:53:29 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:50.773 13:53:29 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:50.773 13:53:29 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:50.773 13:53:29 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:50.773 13:53:29 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:50.773 00:29:50.773 real 0m2.522s 00:29:50.773 user 0m2.133s 00:29:50.773 sys 0m0.275s 00:29:50.773 13:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.773 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:29:50.773 ************************************ 00:29:50.773 END TEST dd_sparse_file_to_file 00:29:50.773 ************************************ 00:29:50.773 13:53:29 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:50.773 13:53:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:50.773 13:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.773 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:29:50.773 ************************************ 00:29:50.773 START TEST dd_sparse_file_to_bdev 00:29:50.773 ************************************ 00:29:50.773 13:53:29 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:29:50.773 13:53:29 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:50.773 13:53:29 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:50.773 13:53:29 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:29:50.773 13:53:29 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:50.773 13:53:29 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:50.773 13:53:29 -- dd/sparse.sh@73 -- # gen_conf 00:29:50.773 13:53:29 -- 
dd/common.sh@31 -- # xtrace_disable 00:29:50.773 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:29:50.773 [2024-07-10 13:53:29.774446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:50.773 [2024-07-10 13:53:29.774588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140520 ] 00:29:50.773 { 00:29:50.773 "subsystems": [ 00:29:50.773 { 00:29:50.773 "subsystem": "bdev", 00:29:50.773 "config": [ 00:29:50.773 { 00:29:50.773 "params": { 00:29:50.773 "block_size": 4096, 00:29:50.773 "filename": "dd_sparse_aio_disk", 00:29:50.773 "name": "dd_aio" 00:29:50.773 }, 00:29:50.773 "method": "bdev_aio_create" 00:29:50.773 }, 00:29:50.773 { 00:29:50.773 "params": { 00:29:50.773 "lvs_name": "dd_lvstore", 00:29:50.773 "thin_provision": true, 00:29:50.773 "lvol_name": "dd_lvol", 00:29:50.773 "size": 37748736 00:29:50.773 }, 00:29:50.773 "method": "bdev_lvol_create" 00:29:50.773 }, 00:29:50.773 { 00:29:50.773 "method": "bdev_wait_for_examine" 00:29:50.773 } 00:29:50.773 ] 00:29:50.773 } 00:29:50.773 ] 00:29:50.773 } 00:29:50.773 [2024-07-10 13:53:29.934583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.032 [2024-07-10 13:53:30.151361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.289 [2024-07-10 13:53:30.534878] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:51.289  Copying: 12/36 [MB] (average 480 MBps)[2024-07-10 13:53:30.603934] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:53.188 00:29:53.188 00:29:53.188 ************************************ 00:29:53.188 00:29:53.188 real 0m2.430s 00:29:53.188 user 0m2.112s 00:29:53.188 sys 0m0.237s 00:29:53.188 13:53:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.188 13:53:32 -- common/autotest_common.sh@10 -- # set +x 00:29:53.188 END TEST dd_sparse_file_to_bdev 00:29:53.188 ************************************ 00:29:53.188 13:53:32 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:53.188 13:53:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:53.188 13:53:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:53.188 13:53:32 -- common/autotest_common.sh@10 -- # set +x 00:29:53.188 ************************************ 00:29:53.188 START TEST dd_sparse_bdev_to_file 00:29:53.188 ************************************ 00:29:53.188 13:53:32 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:29:53.188 13:53:32 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:53.188 13:53:32 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:53.188 13:53:32 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:29:53.188 13:53:32 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:53.188 13:53:32 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:53.188 13:53:32 -- dd/sparse.sh@91 -- # gen_conf 00:29:53.188 13:53:32 -- dd/common.sh@31 -- # xtrace_disable 00:29:53.189 13:53:32 -- common/autotest_common.sh@10 -- # set +x 
00:29:53.189 { 00:29:53.189 "subsystems": [ 00:29:53.189 { 00:29:53.189 "subsystem": "bdev", 00:29:53.189 "config": [ 00:29:53.189 { 00:29:53.189 "params": { 00:29:53.189 "block_size": 4096, 00:29:53.189 "filename": "dd_sparse_aio_disk", 00:29:53.189 "name": "dd_aio" 00:29:53.189 }, 00:29:53.189 "method": "bdev_aio_create" 00:29:53.189 }, 00:29:53.189 { 00:29:53.189 "method": "bdev_wait_for_examine" 00:29:53.189 } 00:29:53.189 ] 00:29:53.189 } 00:29:53.189 ] 00:29:53.189 } 00:29:53.189 [2024-07-10 13:53:32.262916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:53.189 [2024-07-10 13:53:32.263468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140577 ] 00:29:53.189 [2024-07-10 13:53:32.418826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.446 [2024-07-10 13:53:32.653781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.387  Copying: 12/36 [MB] (average 1090 MBps) 00:29:55.387 00:29:55.387 13:53:34 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:55.387 13:53:34 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:55.387 13:53:34 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:55.387 13:53:34 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:55.387 13:53:34 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:55.387 13:53:34 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:55.387 13:53:34 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:55.387 13:53:34 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:55.387 13:53:34 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:55.387 13:53:34 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:55.387 00:29:55.387 real 0m2.489s 00:29:55.387 user 0m2.174s 00:29:55.387 sys 0m0.220s 00:29:55.387 13:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.387 13:53:34 -- common/autotest_common.sh@10 -- # set +x 00:29:55.387 ************************************ 00:29:55.387 END TEST dd_sparse_bdev_to_file 00:29:55.387 ************************************ 00:29:55.645 13:53:34 -- dd/sparse.sh@1 -- # cleanup 00:29:55.645 13:53:34 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:55.645 13:53:34 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:55.645 13:53:34 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:55.645 13:53:34 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:55.645 00:29:55.645 real 0m7.817s 00:29:55.645 user 0m6.593s 00:29:55.645 sys 0m0.947s 00:29:55.645 13:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.645 13:53:34 -- common/autotest_common.sh@10 -- # set +x 00:29:55.645 ************************************ 00:29:55.645 END TEST spdk_dd_sparse 00:29:55.645 ************************************ 00:29:55.645 13:53:34 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:55.645 13:53:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:55.645 13:53:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.645 13:53:34 -- common/autotest_common.sh@10 -- # set +x 00:29:55.645 ************************************ 00:29:55.645 START TEST spdk_dd_negative 00:29:55.645 ************************************ 00:29:55.645 13:53:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:55.645 * Looking for test storage... 
00:29:55.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:55.645 13:53:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:55.645 13:53:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.645 13:53:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.645 13:53:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.645 13:53:34 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:55.646 13:53:34 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:55.646 13:53:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:55.646 13:53:34 -- paths/export.sh@5 -- # export PATH 00:29:55.646 13:53:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:55.646 13:53:34 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:55.646 13:53:34 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:55.646 13:53:34 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:55.646 13:53:34 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:55.646 13:53:34 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:55.646 13:53:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:55.646 13:53:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.646 13:53:34 -- common/autotest_common.sh@10 -- # set +x 00:29:55.646 ************************************ 00:29:55.646 
START TEST dd_invalid_arguments 00:29:55.646 ************************************ 00:29:55.646 13:53:34 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:55.646 13:53:34 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:55.646 13:53:34 -- common/autotest_common.sh@640 -- # local es=0 00:29:55.646 13:53:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:55.646 13:53:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.646 13:53:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.646 13:53:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.646 13:53:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.646 13:53:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.646 13:53:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.646 13:53:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.646 13:53:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:55.646 13:53:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:55.906 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:55.906 options: 00:29:55.906 -c, --config JSON config file (default none) 00:29:55.906 --json JSON config file (default none) 00:29:55.906 --json-ignore-init-errors 00:29:55.906 don't exit on invalid config entry 00:29:55.906 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:55.906 -g, --single-file-segments 00:29:55.906 force creating just one hugetlbfs file 00:29:55.906 -h, --help show this usage 00:29:55.906 -i, --shm-id shared memory ID (optional) 00:29:55.906 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:55.906 --lcores lcore to CPU mapping list. The list is in the format: 00:29:55.906 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:29:55.906 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:55.906 Within the group, '-' is used for range separator, 00:29:55.906 ',' is used for single number separator. 00:29:55.906 '( )' can be omitted for single element group, 00:29:55.906 '@' can be omitted if cpus and lcores have the same value 00:29:55.906 -n, --mem-channels channel number of memory channels used for DPDK 00:29:55.906 -p, --main-core main (primary) core for DPDK 00:29:55.906 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:55.906 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:55.906 --disable-cpumask-locks Disable CPU core lock files.
00:29:55.906 --silence-noticelog disable notice level logging to stderr 00:29:55.906 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:55.906 -u, --no-pci disable PCI access 00:29:55.906 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:55.906 --max-delay maximum reactor delay (in microseconds) 00:29:55.906 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:55.906 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:55.906 -R, --huge-unlink unlink huge files after initialization 00:29:55.906 -v, --version print SPDK version 00:29:55.906 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:55.906 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:55.906 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:55.906 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:55.906 Tracepoints vary in size and can use more than one trace entry. 00:29:55.906 --rpcs-allowed comma-separated list of permitted RPCs 00:29:55.906 --env-context Opaque context for use of the env implementation 00:29:55.906 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:55.906 --no-huge run without using hugepages 00:29:55.906 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:55.906 -e, --tpoint-group <group-name>[:<tpoint_mask>] 00:29:55.906 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:55.906 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:55.906 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:29:55.906 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:55.906 [2024-07-10 13:53:35.019774] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:55.906 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:55.906 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:55.906 [--------- DD Options ---------] 00:29:55.906 --if Input file. Must specify either --if or --ib. 00:29:55.906 --ib Input bdev. Must specify either --if or --ib. 00:29:55.906 --of Output file. Must specify either --of or --ob. 00:29:55.906 --ob Output bdev. Must specify either --of or --ob. 00:29:55.906 --iflag Input file flags. 00:29:55.906 --oflag Output file flags. 00:29:55.906 --bs I/O unit size (default: 4096) 00:29:55.906 --qd Queue depth (default: 2) 00:29:55.906 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:29:55.906 --skip Skip this many I/O units at start of input. (default: 0) 00:29:55.906 --seek Skip this many I/O units at start of output. (default: 0) 00:29:55.906 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:55.906 --sparse Enable hole skipping in input target 00:29:55.906 Available iflag and oflag values: 00:29:55.906 append - append mode 00:29:55.906 direct - use direct I/O for data 00:29:55.906 directory - fail unless a directory 00:29:55.906 dsync - use synchronized I/O for data 00:29:55.906 noatime - do not update access time 00:29:55.906 noctty - do not assign controlling terminal from file 00:29:55.906 nofollow - do not follow symlinks 00:29:55.906 nonblock - use non-blocking I/O 00:29:55.906 sync - use synchronized I/O for data and metadata 00:29:55.907 13:53:35 -- common/autotest_common.sh@643 -- # es=2 00:29:55.907 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:55.907 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:55.907 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:55.907 00:29:55.907 real 0m0.117s 00:29:55.907 user 0m0.056s 00:29:55.907 sys 0m0.061s 00:29:55.907 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.907 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:55.907 ************************************ 00:29:55.907 END TEST dd_invalid_arguments 00:29:55.907 ************************************ 00:29:55.907 13:53:35 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:55.907 13:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:55.907 13:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.907 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:55.907 ************************************ 00:29:55.907 START TEST dd_double_input 00:29:55.907 ************************************ 00:29:55.907 13:53:35 -- common/autotest_common.sh@1104 -- # double_input 00:29:55.907 13:53:35 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:55.907 13:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:55.907 13:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:55.907 13:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.907 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.907 13:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.907 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.907 13:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.907 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:55.907 13:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.907 13:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:55.907 13:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:55.907 [2024-07-10 13:53:35.199985] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
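The dd_double_input case above hands spdk_dd both a file input (--if) and a bdev input (--ib) and expects exactly the rejection printed in the ERROR line; the es= traces that follow then verify the non-zero exit status (22, i.e. EINVAL, in this run). A minimal stand-alone sketch of the same check, with the spdk_dd path assumed from this workspace layout and the bdev names purely illustrative:

  # Hedged repro of the dd_double_input case above; SomeBdev/OtherBdev are
  # placeholder names, not bdevs that exist in this run.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  touch dd.dump0
  es=0
  "$SPDK_DD" --if=dd.dump0 --ib=SomeBdev --ob=OtherBdev || es=$?
  if (( es == 0 )); then
      echo "FAIL: conflicting --if/--ib were accepted" >&2
  else
      echo "PASS: spdk_dd rejected the conflict (exit status $es)"
  fi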
00:29:56.166 13:53:35 -- common/autotest_common.sh@643 -- # es=22 00:29:56.166 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:56.166 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:56.166 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:56.166 00:29:56.166 real 0m0.123s 00:29:56.166 user 0m0.064s 00:29:56.166 sys 0m0.060s 00:29:56.166 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.166 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.166 ************************************ 00:29:56.166 END TEST dd_double_input 00:29:56.166 ************************************ 00:29:56.166 13:53:35 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:56.166 13:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.166 13:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.166 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.166 ************************************ 00:29:56.166 START TEST dd_double_output 00:29:56.166 ************************************ 00:29:56.166 13:53:35 -- common/autotest_common.sh@1104 -- # double_output 00:29:56.166 13:53:35 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:56.166 13:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:56.166 13:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:56.166 13:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.166 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.166 13:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.166 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.166 13:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.167 13:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:56.167 13:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:56.167 [2024-07-10 13:53:35.379912] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
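Every negative case here funnels through the same exit-status bookkeeping visible in the es= traces: record the failing status, fold statuses above 128 down, and succeed only when the wrapped command failed. A hedged sketch of that NOT-style wrapper follows; the real helper lives in test/common/autotest_common.sh and also validates the executable through valid_exec_arg (the type -t / type -P traces above), and its later case "$es" step that maps values such as 116 or 106 down to 1 is omitted here:

  # Approximate NOT() wrapper; succeeds only if "$@" exits non-zero.
  # The masking of es > 128 mirrors the (( es > 128 )) checks in the
  # traces (e.g. 244 -> 116), but the exact rule is an assumption.
  NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then
          es=$(( es & 0x7f ))
      fi
      (( es != 0 ))
  }

With such a wrapper, NOT "$SPDK_DD" --of=file --ob=bdev passes exactly when spdk_dd refuses the conflicting outputs, which is what the es=22 lines below record for dd_double_output.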
00:29:56.167 13:53:35 -- common/autotest_common.sh@643 -- # es=22 00:29:56.167 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:56.167 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:56.167 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:56.167 00:29:56.167 real 0m0.115s 00:29:56.167 user 0m0.065s 00:29:56.167 sys 0m0.051s 00:29:56.167 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.167 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.167 ************************************ 00:29:56.167 END TEST dd_double_output 00:29:56.167 ************************************ 00:29:56.167 13:53:35 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:56.167 13:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.167 13:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.167 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.167 ************************************ 00:29:56.167 START TEST dd_no_input 00:29:56.167 ************************************ 00:29:56.167 13:53:35 -- common/autotest_common.sh@1104 -- # no_input 00:29:56.167 13:53:35 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:56.167 13:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:56.167 13:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:56.167 13:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.167 13:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.167 13:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.167 13:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.167 13:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:56.167 13:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:56.425 [2024-07-10 13:53:35.546598] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:56.425 13:53:35 -- common/autotest_common.sh@643 -- # es=22 00:29:56.425 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:56.425 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:56.425 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:56.425 00:29:56.425 real 0m0.118s 00:29:56.425 user 0m0.071s 00:29:56.425 sys 0m0.048s 00:29:56.425 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.425 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.425 ************************************ 00:29:56.425 END TEST dd_no_input 00:29:56.425 ************************************ 00:29:56.425 13:53:35 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:56.425 13:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.425 13:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.425 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.425 ************************************ 
00:29:56.425 START TEST dd_no_output 00:29:56.425 ************************************ 00:29:56.425 13:53:35 -- common/autotest_common.sh@1104 -- # no_output 00:29:56.425 13:53:35 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:56.425 13:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:56.425 13:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:56.425 13:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.425 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.425 13:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.425 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.425 13:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.425 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.425 13:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.425 13:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:56.425 13:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:56.425 [2024-07-10 13:53:35.727134] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:56.705 13:53:35 -- common/autotest_common.sh@643 -- # es=22 00:29:56.705 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:56.705 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:56.705 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:56.705 00:29:56.705 real 0m0.126s 00:29:56.705 user 0m0.079s 00:29:56.705 sys 0m0.047s 00:29:56.705 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.705 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.705 ************************************ 00:29:56.705 END TEST dd_no_output 00:29:56.705 ************************************ 00:29:56.705 13:53:35 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:56.705 13:53:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.705 13:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.705 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.705 ************************************ 00:29:56.705 START TEST dd_wrong_blocksize 00:29:56.705 ************************************ 00:29:56.705 13:53:35 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:56.705 13:53:35 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:56.705 13:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:56.705 13:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:56.705 13:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:35 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:56.705 13:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.705 13:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.705 13:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:56.705 13:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:56.705 [2024-07-10 13:53:35.918175] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:56.705 13:53:35 -- common/autotest_common.sh@643 -- # es=22 00:29:56.705 13:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:56.705 13:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:56.705 13:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:56.705 00:29:56.705 real 0m0.119s 00:29:56.705 user 0m0.067s 00:29:56.705 sys 0m0.052s 00:29:56.705 13:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.705 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:29:56.705 ************************************ 00:29:56.705 END TEST dd_wrong_blocksize 00:29:56.705 ************************************ 00:29:56.705 13:53:36 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:56.705 13:53:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.705 13:53:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.705 13:53:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.705 ************************************ 00:29:56.705 START TEST dd_smaller_blocksize 00:29:56.705 ************************************ 00:29:56.705 13:53:36 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:56.705 13:53:36 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:56.705 13:53:36 -- common/autotest_common.sh@640 -- # local es=0 00:29:56.705 13:53:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:56.705 13:53:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.705 13:53:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.705 13:53:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:56.705 13:53:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:56.705 13:53:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:56.705 13:53:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:56.968 [2024-07-10 13:53:36.095589] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:56.968 [2024-07-10 13:53:36.095757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140868 ] 00:29:56.968 [2024-07-10 13:53:36.253674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.226 [2024-07-10 13:53:36.476891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.794 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:57.794 [2024-07-10 13:53:37.113607] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:57.794 [2024-07-10 13:53:37.113692] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:58.732 [2024-07-10 13:53:38.010809] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:59.299 13:53:38 -- common/autotest_common.sh@643 -- # es=244 00:29:59.299 13:53:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:59.299 13:53:38 -- common/autotest_common.sh@652 -- # es=116 00:29:59.299 13:53:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:59.299 13:53:38 -- common/autotest_common.sh@660 -- # es=1 00:29:59.299 13:53:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:59.299 ************************************ 00:29:59.299 END TEST dd_smaller_blocksize 00:29:59.299 ************************************ 00:29:59.299 00:29:59.299 real 0m2.436s 00:29:59.299 user 0m1.895s 00:29:59.299 sys 0m0.439s 00:29:59.299 13:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.299 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.299 13:53:38 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:59.299 13:53:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.299 13:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.299 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.299 ************************************ 00:29:59.299 START TEST dd_invalid_count 00:29:59.299 ************************************ 00:29:59.299 13:53:38 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:59.299 13:53:38 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:59.299 13:53:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:59.299 13:53:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:59.300 13:53:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.300 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.300 13:53:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.300 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.300 13:53:38 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.300 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.300 13:53:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.300 13:53:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:59.300 13:53:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:59.300 [2024-07-10 13:53:38.585038] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:59.300 13:53:38 -- common/autotest_common.sh@643 -- # es=22 00:29:59.300 13:53:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:59.300 13:53:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:59.300 13:53:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:59.300 00:29:59.300 real 0m0.113s 00:29:59.300 user 0m0.060s 00:29:59.300 sys 0m0.054s 00:29:59.300 13:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.300 ************************************ 00:29:59.300 END TEST dd_invalid_count 00:29:59.300 ************************************ 00:29:59.300 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.559 13:53:38 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:59.559 13:53:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.559 13:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.559 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.559 ************************************ 00:29:59.559 START TEST dd_invalid_oflag 00:29:59.559 ************************************ 00:29:59.559 13:53:38 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:59.559 13:53:38 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:59.559 13:53:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:59.559 [2024-07-10 13:53:38.761283] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:59.559 13:53:38 -- common/autotest_common.sh@643 -- # es=22 00:29:59.559 13:53:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:59.559 13:53:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:59.559 
13:53:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:59.559 00:29:59.559 real 0m0.115s 00:29:59.559 user 0m0.062s 00:29:59.559 sys 0m0.055s 00:29:59.559 13:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.559 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.559 ************************************ 00:29:59.559 END TEST dd_invalid_oflag 00:29:59.559 ************************************ 00:29:59.559 13:53:38 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:59.559 13:53:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.559 13:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.559 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.559 ************************************ 00:29:59.559 START TEST dd_invalid_iflag 00:29:59.559 ************************************ 00:29:59.559 13:53:38 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:59.559 13:53:38 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:59.559 13:53:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.559 13:53:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:59.559 13:53:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:59.818 [2024-07-10 13:53:38.939203] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:59.818 13:53:38 -- common/autotest_common.sh@643 -- # es=22 00:29:59.818 13:53:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:59.818 13:53:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:59.818 13:53:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:59.818 00:29:59.818 real 0m0.113s 00:29:59.818 user 0m0.052s 00:29:59.818 sys 0m0.062s 00:29:59.818 13:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.818 13:53:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.818 ************************************ 00:29:59.818 END TEST dd_invalid_iflag 00:29:59.818 ************************************ 00:29:59.818 13:53:39 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:59.818 13:53:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:59.818 13:53:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.818 13:53:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.818 ************************************ 00:29:59.818 START TEST dd_unknown_flag 00:29:59.818 ************************************ 00:29:59.818 13:53:39 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:59.818 13:53:39 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:59.818 13:53:39 -- common/autotest_common.sh@640 -- # local es=0 00:29:59.818 13:53:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:59.818 13:53:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.818 13:53:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.818 13:53:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.818 13:53:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.818 13:53:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.818 13:53:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:59.818 13:53:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:59.818 13:53:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:59.818 13:53:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:59.819 [2024-07-10 13:53:39.113648] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:59.819 [2024-07-10 13:53:39.113777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140998 ] 00:30:00.078 [2024-07-10 13:53:39.271788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.338 [2024-07-10 13:53:39.492311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.597 [2024-07-10 13:53:39.868105] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:30:00.597 [2024-07-10 13:53:39.868198] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:30:00.597 [2024-07-10 13:53:39.868216] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:30:00.597 [2024-07-10 13:53:39.868251] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:01.533 [2024-07-10 13:53:40.782573] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:02.100 13:53:41 -- common/autotest_common.sh@643 -- # es=234 00:30:02.100 13:53:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:02.100 13:53:41 -- common/autotest_common.sh@652 -- # es=106 00:30:02.100 13:53:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:02.100 13:53:41 -- common/autotest_common.sh@660 -- # es=1 00:30:02.101 13:53:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:02.101 ************************************ 00:30:02.101 END TEST dd_unknown_flag 00:30:02.101 ************************************ 00:30:02.101 00:30:02.101 real 0m2.206s 00:30:02.101 user 0m1.895s 00:30:02.101 sys 0m0.211s 00:30:02.101 13:53:41 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:30:02.101 13:53:41 -- common/autotest_common.sh@10 -- # set +x 00:30:02.101 13:53:41 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:30:02.101 13:53:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:02.101 13:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:02.101 13:53:41 -- common/autotest_common.sh@10 -- # set +x 00:30:02.101 ************************************ 00:30:02.101 START TEST dd_invalid_json 00:30:02.101 ************************************ 00:30:02.101 13:53:41 -- common/autotest_common.sh@1104 -- # invalid_json 00:30:02.101 13:53:41 -- dd/negative_dd.sh@95 -- # : 00:30:02.101 13:53:41 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:02.101 13:53:41 -- common/autotest_common.sh@640 -- # local es=0 00:30:02.101 13:53:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:02.101 13:53:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.101 13:53:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:02.101 13:53:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.101 13:53:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:02.101 13:53:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.101 13:53:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:02.101 13:53:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:02.101 13:53:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:02.101 13:53:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:02.101 [2024-07-10 13:53:41.384289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:02.101 [2024-07-10 13:53:41.384449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141051 ] 00:30:02.360 [2024-07-10 13:53:41.546732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.620 [2024-07-10 13:53:41.777374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.620 [2024-07-10 13:53:41.777532] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:30:02.620 [2024-07-10 13:53:41.777564] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:02.620 [2024-07-10 13:53:41.777620] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:30:02.944 ************************************ 00:30:02.944 END TEST dd_invalid_json 00:30:02.944 ************************************ 00:30:02.944 13:53:42 -- common/autotest_common.sh@643 -- # es=234 00:30:02.944 13:53:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:02.944 13:53:42 -- common/autotest_common.sh@652 -- # es=106 00:30:02.944 13:53:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:30:02.944 13:53:42 -- common/autotest_common.sh@660 -- # es=1 00:30:02.944 13:53:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:02.944 00:30:02.944 real 0m0.922s 00:30:02.944 user 0m0.714s 00:30:02.944 sys 0m0.109s 00:30:02.944 13:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.944 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:30:02.944 00:30:02.944 real 0m7.470s 00:30:02.944 user 0m5.522s 00:30:02.944 sys 0m1.703s 00:30:02.944 13:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.944 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:30:02.944 ************************************ 00:30:02.944 END TEST spdk_dd_negative 00:30:02.944 ************************************ 00:30:03.203 00:30:03.203 real 3m1.921s 00:30:03.203 user 2m32.927s 00:30:03.203 sys 0m19.577s 00:30:03.203 13:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.203 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:30:03.203 ************************************ 00:30:03.203 END TEST spdk_dd 00:30:03.203 ************************************ 00:30:03.203 13:53:42 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:30:03.203 13:53:42 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:03.203 13:53:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:03.203 13:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.203 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:30:03.203 ************************************ 00:30:03.203 START TEST blockdev_nvme 00:30:03.203 ************************************ 00:30:03.203 13:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:03.203 * Looking for test storage... 
00:30:03.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:03.203 13:53:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:03.203 13:53:42 -- bdev/nbd_common.sh@6 -- # set -e 00:30:03.203 13:53:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:03.203 13:53:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:03.203 13:53:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:03.203 13:53:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:03.203 13:53:42 -- bdev/blockdev.sh@18 -- # : 00:30:03.203 13:53:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:30:03.203 13:53:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:30:03.203 13:53:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:30:03.203 13:53:42 -- bdev/blockdev.sh@672 -- # uname -s 00:30:03.203 13:53:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:30:03.203 13:53:42 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:30:03.203 13:53:42 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:30:03.203 13:53:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:30:03.203 13:53:42 -- bdev/blockdev.sh@682 -- # dek= 00:30:03.203 13:53:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:30:03.203 13:53:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:30:03.203 13:53:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:30:03.203 13:53:42 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:30:03.203 13:53:42 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:30:03.203 13:53:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:30:03.204 13:53:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141145 00:30:03.204 13:53:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:03.204 13:53:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:03.204 13:53:42 -- bdev/blockdev.sh@47 -- # waitforlisten 141145 00:30:03.204 13:53:42 -- common/autotest_common.sh@819 -- # '[' -z 141145 ']' 00:30:03.204 13:53:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.204 13:53:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:03.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.204 13:53:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.204 13:53:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:03.204 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:30:03.464 [2024-07-10 13:53:42.592713] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:03.464 [2024-07-10 13:53:42.592847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141145 ] 00:30:03.464 [2024-07-10 13:53:42.744165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.723 [2024-07-10 13:53:42.956647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:03.723 [2024-07-10 13:53:42.956870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.103 13:53:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.103 13:53:44 -- common/autotest_common.sh@852 -- # return 0 00:30:05.103 13:53:44 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:30:05.103 13:53:44 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:30:05.103 13:53:44 -- bdev/blockdev.sh@79 -- # local json 00:30:05.103 13:53:44 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:30:05.103 13:53:44 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:05.103 13:53:44 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@738 -- # cat 00:30:05.103 13:53:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:30:05.103 13:53:44 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:30:05.103 13:53:44 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:30:05.103 13:53:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.103 13:53:44 -- common/autotest_common.sh@10 -- # set +x 00:30:05.103 13:53:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.103 13:53:44 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:30:05.103 13:53:44 -- bdev/blockdev.sh@747 -- # jq -r .name 00:30:05.103 13:53:44 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "6ec3820c-d3fd-4c83-bd80-4732052795eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6ec3820c-d3fd-4c83-bd80-4732052795eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:30:05.103 13:53:44 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:30:05.103 13:53:44 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:30:05.103 13:53:44 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:30:05.103 13:53:44 -- bdev/blockdev.sh@752 -- # killprocess 141145 00:30:05.103 13:53:44 -- common/autotest_common.sh@926 -- # '[' -z 141145 ']' 00:30:05.103 13:53:44 -- common/autotest_common.sh@930 -- # kill -0 141145 00:30:05.103 13:53:44 -- common/autotest_common.sh@931 -- # uname 00:30:05.103 13:53:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:05.103 13:53:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141145 00:30:05.103 13:53:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:05.103 13:53:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:05.103 13:53:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141145' 00:30:05.103 killing process with pid 141145 00:30:05.103 13:53:44 -- common/autotest_common.sh@945 -- # kill 141145 00:30:05.103 13:53:44 -- common/autotest_common.sh@950 -- # wait 141145 00:30:07.643 13:53:46 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.643 13:53:46 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:07.643 13:53:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:07.643 13:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.643 13:53:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.643 ************************************ 00:30:07.643 START TEST bdev_hello_world 00:30:07.643 ************************************ 00:30:07.643 13:53:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:07.643 [2024-07-10 13:53:46.676079] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:07.643 [2024-07-10 13:53:46.676253] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141259 ] 00:30:07.643 [2024-07-10 13:53:46.834378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.902 [2024-07-10 13:53:47.047565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.472 [2024-07-10 13:53:47.541397] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:08.472 [2024-07-10 13:53:47.541465] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:30:08.472 [2024-07-10 13:53:47.541487] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:08.472 [2024-07-10 13:53:47.544152] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:08.472 [2024-07-10 13:53:47.544682] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:08.472 [2024-07-10 13:53:47.544720] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:08.472 [2024-07-10 13:53:47.544898] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:08.472 00:30:08.472 [2024-07-10 13:53:47.544926] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:09.853 00:30:09.853 real 0m2.303s 00:30:09.853 user 0m2.003s 00:30:09.853 sys 0m0.200s 00:30:09.853 13:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.853 13:53:48 -- common/autotest_common.sh@10 -- # set +x 00:30:09.853 ************************************ 00:30:09.853 END TEST bdev_hello_world 00:30:09.853 ************************************ 00:30:09.853 13:53:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:09.853 13:53:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:09.853 13:53:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:09.853 13:53:48 -- common/autotest_common.sh@10 -- # set +x 00:30:09.853 ************************************ 00:30:09.853 START TEST bdev_bounds 00:30:09.853 ************************************ 00:30:09.853 13:53:48 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:30:09.853 13:53:48 -- bdev/blockdev.sh@288 -- # bdevio_pid=141314 00:30:09.853 13:53:48 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:09.854 13:53:48 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:09.854 13:53:48 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 141314' 00:30:09.854 Process bdevio pid: 141314 00:30:09.854 13:53:48 -- bdev/blockdev.sh@291 -- # waitforlisten 141314 00:30:09.854 13:53:48 -- common/autotest_common.sh@819 -- # '[' -z 141314 ']' 00:30:09.854 13:53:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.854 13:53:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:09.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.854 13:53:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
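The waitforlisten call traced here blocks the bdev_bounds test until the bdevio process (pid 141314) is answering on its RPC socket at /var/tmp/spdk.sock, retrying up to the max_retries=100 budget shown above. A rough stand-alone equivalent of that loop; the real helper in test/common/autotest_common.sh is more involved (it also checks that the pid stays alive), and using rpc_get_methods as the liveness probe is an assumption of this sketch:

  # Hedged wait-for-RPC loop mirroring waitforlisten's visible parameters.
  sock=/var/tmp/spdk.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      if [[ -S $sock ]] && "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
          break   # target process is up and serving RPCs
      fi
      sleep 0.1
  done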
00:30:09.854 13:53:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:09.854 13:53:48 -- common/autotest_common.sh@10 -- # set +x 00:30:09.854 [2024-07-10 13:53:49.043490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:09.854 [2024-07-10 13:53:49.043653] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141314 ] 00:30:10.116 [2024-07-10 13:53:49.220375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:10.116 [2024-07-10 13:53:49.448883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.116 [2024-07-10 13:53:49.449042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.116 [2024-07-10 13:53:49.449060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.495 13:53:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.495 13:53:50 -- common/autotest_common.sh@852 -- # return 0 00:30:11.495 13:53:50 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:11.495 I/O targets: 00:30:11.495 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:30:11.495 00:30:11.495 00:30:11.495 CUnit - A unit testing framework for C - Version 2.1-3 00:30:11.495 http://cunit.sourceforge.net/ 00:30:11.495 00:30:11.495 00:30:11.495 Suite: bdevio tests on: Nvme0n1 00:30:11.495 Test: blockdev write read block ...passed 00:30:11.495 Test: blockdev write zeroes read block ...passed 00:30:11.495 Test: blockdev write zeroes read no split ...passed 00:30:11.495 Test: blockdev write zeroes read split ...passed 00:30:11.495 Test: blockdev write zeroes read split partial ...passed 00:30:11.495 Test: blockdev reset ...[2024-07-10 13:53:50.798969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:11.495 [2024-07-10 13:53:50.803252] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:11.495 passed 00:30:11.495 Test: blockdev write read 8 blocks ...passed 00:30:11.495 Test: blockdev write read size > 128k ...passed 00:30:11.495 Test: blockdev write read invalid size ...passed 00:30:11.495 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:11.495 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:11.495 Test: blockdev write read max offset ...passed 00:30:11.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:11.495 Test: blockdev writev readv 8 blocks ...passed 00:30:11.495 Test: blockdev writev readv 30 x 1block ...passed 00:30:11.495 Test: blockdev writev readv block ...passed 00:30:11.495 Test: blockdev writev readv size > 128k ...passed 00:30:11.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:11.495 Test: blockdev comparev and writev ...[2024-07-10 13:53:50.811221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xb020d000 len:0x1000 00:30:11.495 [2024-07-10 13:53:50.811311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:11.495 passed 00:30:11.495 Test: blockdev nvme passthru rw ...passed 00:30:11.495 Test: blockdev nvme passthru vendor specific ...[2024-07-10 13:53:50.811897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:11.495 [2024-07-10 13:53:50.811945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:11.495 passed 00:30:11.495 Test: blockdev nvme admin passthru ...passed 00:30:11.495 Test: blockdev copy ...passed 00:30:11.495 00:30:11.495 Run Summary: Type Total Ran Passed Failed Inactive 00:30:11.495 suites 1 1 n/a 0 0 00:30:11.495 tests 23 23 23 0 0 00:30:11.495 asserts 152 152 152 0 n/a 00:30:11.495 00:30:11.495 Elapsed time = 0.287 seconds 00:30:11.495 0 00:30:11.495 13:53:50 -- bdev/blockdev.sh@293 -- # killprocess 141314 00:30:11.495 13:53:50 -- common/autotest_common.sh@926 -- # '[' -z 141314 ']' 00:30:11.495 13:53:50 -- common/autotest_common.sh@930 -- # kill -0 141314 00:30:11.495 13:53:50 -- common/autotest_common.sh@931 -- # uname 00:30:11.495 13:53:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:11.495 13:53:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141314 00:30:11.755 13:53:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:11.755 13:53:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:11.755 killing process with pid 141314 00:30:11.755 13:53:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141314' 00:30:11.755 13:53:50 -- common/autotest_common.sh@945 -- # kill 141314 00:30:11.755 13:53:50 -- common/autotest_common.sh@950 -- # wait 141314 00:30:13.152 13:53:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:13.152 00:30:13.152 real 0m3.357s 00:30:13.152 user 0m8.518s 00:30:13.152 sys 0m0.312s 00:30:13.152 13:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.152 13:53:52 -- common/autotest_common.sh@10 -- # set +x 00:30:13.152 ************************************ 00:30:13.152 END TEST bdev_bounds 00:30:13.152 ************************************ 00:30:13.152 13:53:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:30:13.152 13:53:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:13.152 13:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.152 13:53:52 -- common/autotest_common.sh@10 -- # set +x 00:30:13.152 ************************************ 00:30:13.152 START TEST bdev_nbd 00:30:13.152 ************************************ 00:30:13.152 13:53:52 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:30:13.152 13:53:52 -- bdev/blockdev.sh@298 -- # uname -s 00:30:13.152 13:53:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:13.152 13:53:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:13.152 13:53:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:13.152 13:53:52 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:30:13.152 13:53:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:13.152 13:53:52 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:30:13.152 13:53:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:13.152 13:53:52 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:30:13.152 13:53:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:13.152 13:53:52 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:30:13.152 13:53:52 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:30:13.152 13:53:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:13.152 13:53:52 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:30:13.152 13:53:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:13.152 13:53:52 -- bdev/blockdev.sh@316 -- # nbd_pid=141390 00:30:13.152 13:53:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:13.152 13:53:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:13.152 13:53:52 -- bdev/blockdev.sh@318 -- # waitforlisten 141390 /var/tmp/spdk-nbd.sock 00:30:13.152 13:53:52 -- common/autotest_common.sh@819 -- # '[' -z 141390 ']' 00:30:13.152 13:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:13.152 13:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.152 13:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:13.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:13.152 13:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.152 13:53:52 -- common/autotest_common.sh@10 -- # set +x 00:30:13.152 [2024-07-10 13:53:52.472156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:13.152 [2024-07-10 13:53:52.472335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.412 [2024-07-10 13:53:52.616832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.672 [2024-07-10 13:53:52.820467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.049 13:53:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:15.049 13:53:54 -- common/autotest_common.sh@852 -- # return 0 00:30:15.049 13:53:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@24 -- # local i 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:15.049 13:53:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:15.049 13:53:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:15.049 13:53:54 -- common/autotest_common.sh@857 -- # local i 00:30:15.049 13:53:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:15.049 13:53:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:15.049 13:53:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:15.049 13:53:54 -- common/autotest_common.sh@861 -- # break 00:30:15.049 13:53:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:15.049 13:53:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:15.050 13:53:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:15.050 1+0 records in 00:30:15.050 1+0 records out 00:30:15.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045926 s, 8.9 MB/s 00:30:15.050 13:53:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:15.050 13:53:54 -- common/autotest_common.sh@874 -- # size=4096 00:30:15.050 13:53:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:15.050 13:53:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:15.050 13:53:54 -- common/autotest_common.sh@877 -- # return 0 00:30:15.050 13:53:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:15.050 13:53:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:30:15.050 13:53:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:15.309 13:53:54 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:15.309 { 00:30:15.309 "nbd_device": "/dev/nbd0", 00:30:15.309 "bdev_name": "Nvme0n1" 00:30:15.309 } 00:30:15.309 ]' 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:15.309 { 00:30:15.309 "nbd_device": "/dev/nbd0", 00:30:15.309 "bdev_name": "Nvme0n1" 00:30:15.309 } 00:30:15.309 ]' 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@51 -- # local i 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:15.309 13:53:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@41 -- # break 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@45 -- # return 0 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.567 13:53:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@65 -- # true 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@65 -- # count=0 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@122 -- # count=0 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@127 -- # return 0 00:30:15.831 13:53:54 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
00:30:15.831 13:53:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@12 -- # local i 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:15.831 13:53:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:30:16.111 /dev/nbd0 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:16.111 13:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:16.111 13:53:55 -- common/autotest_common.sh@857 -- # local i 00:30:16.111 13:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:16.111 13:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:16.111 13:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:16.111 13:53:55 -- common/autotest_common.sh@861 -- # break 00:30:16.111 13:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:16.111 13:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:16.111 13:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:16.111 1+0 records in 00:30:16.111 1+0 records out 00:30:16.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435973 s, 9.4 MB/s 00:30:16.111 13:53:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:16.111 13:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:30:16.111 13:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:16.111 13:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:16.111 13:53:55 -- common/autotest_common.sh@877 -- # return 0 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.111 13:53:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:16.371 { 00:30:16.371 "nbd_device": "/dev/nbd0", 00:30:16.371 "bdev_name": "Nvme0n1" 00:30:16.371 } 00:30:16.371 ]' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:16.371 { 00:30:16.371 "nbd_device": "/dev/nbd0", 00:30:16.371 "bdev_name": "Nvme0n1" 00:30:16.371 } 00:30:16.371 ]' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@65 -- # count=1 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@66 -- # echo 1 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@95 -- # count=1 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
00:30:16.371 13:53:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:16.371 256+0 records in 00:30:16.371 256+0 records out 00:30:16.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012361 s, 84.8 MB/s 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:16.371 256+0 records in 00:30:16.371 256+0 records out 00:30:16.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0415441 s, 25.2 MB/s 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@51 -- # local i 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.371 13:53:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@41 -- # break 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.630 13:53:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:30:16.888 13:53:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@65 -- # true 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@65 -- # count=0 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@104 -- # count=0 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@109 -- # return 0 00:30:16.888 13:53:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:16.888 13:53:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:17.146 malloc_lvol_verify 00:30:17.146 13:53:56 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:17.405 5fff4fd3-3534-4e55-b93e-afafab25725e 00:30:17.405 13:53:56 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:17.405 5e4bbb48-2e95-49f9-98ab-a39215194e0c 00:30:17.405 13:53:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:17.664 /dev/nbd0 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:17.664 mke2fs 1.45.5 (07-Jan-2020) 00:30:17.664 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:17.664 00:30:17.664 Filesystem too small for a journal 00:30:17.664 00:30:17.664 Allocating group tables: 0/1 done 00:30:17.664 Writing inode tables: 0/1 done 00:30:17.664 Writing superblocks and filesystem accounting information: 0/1 done 00:30:17.664 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@51 -- # local i 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:17.664 13:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@41 -- # break 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@45 -- # return 0 00:30:17.921 13:53:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:17.921 13:53:57 -- 
bdev/nbd_common.sh@147 -- # return 0 00:30:17.922 13:53:57 -- bdev/blockdev.sh@324 -- # killprocess 141390 00:30:17.922 13:53:57 -- common/autotest_common.sh@926 -- # '[' -z 141390 ']' 00:30:17.922 13:53:57 -- common/autotest_common.sh@930 -- # kill -0 141390 00:30:17.922 13:53:57 -- common/autotest_common.sh@931 -- # uname 00:30:17.922 13:53:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:17.922 13:53:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141390 00:30:17.922 killing process with pid 141390 00:30:17.922 13:53:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:17.922 13:53:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:17.922 13:53:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141390' 00:30:17.922 13:53:57 -- common/autotest_common.sh@945 -- # kill 141390 00:30:17.922 13:53:57 -- common/autotest_common.sh@950 -- # wait 141390 00:30:19.313 ************************************ 00:30:19.313 END TEST bdev_nbd 00:30:19.313 ************************************ 00:30:19.313 13:53:58 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:19.313 00:30:19.313 real 0m6.174s 00:30:19.313 user 0m8.493s 00:30:19.313 sys 0m1.104s 00:30:19.313 13:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.313 13:53:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.313 skipping fio tests on NVMe due to multi-ns failures. 00:30:19.313 13:53:58 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:19.313 13:53:58 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:30:19.313 13:53:58 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:19.313 13:53:58 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:19.313 13:53:58 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:19.313 13:53:58 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:19.313 13:53:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:19.313 13:53:58 -- common/autotest_common.sh@10 -- # set +x 00:30:19.313 ************************************ 00:30:19.313 START TEST bdev_verify 00:30:19.313 ************************************ 00:30:19.313 13:53:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:19.592 [2024-07-10 13:53:58.698990] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:19.592 [2024-07-10 13:53:58.699195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141610 ] 00:30:19.592 [2024-07-10 13:53:58.864020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.849 [2024-07-10 13:53:59.066810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.849 [2024-07-10 13:53:59.066814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.414 Running I/O for 5 seconds... 
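The verify stage above is a plain bdevperf run against the generated bdev.json; nothing in it is harness-specific. A minimal standalone reproduction, assuming the repo layout traced in this log (all flags are copied verbatim from the traced command line, including -C):
    # bdevperf verify workload: queue depth 128, 4 KiB I/Os, 5 s runtime,
    # core mask 0x3 -- matching the two "Reactor started" lines above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3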
00:30:25.685 00:30:25.685 Latency(us) 00:30:25.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.685 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.685 Verification LBA range: start 0x0 length 0xa0000 00:30:25.685 Nvme0n1 : 5.01 18509.94 72.30 0.00 0.00 6883.36 497.24 15568.38 00:30:25.685 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:25.685 Verification LBA range: start 0xa0000 length 0xa0000 00:30:25.685 Nvme0n1 : 5.01 18360.02 71.72 0.00 0.00 6940.12 332.69 21063.10 00:30:25.685 =================================================================================================================== 00:30:25.685 Total : 36869.96 144.02 0.00 0.00 6911.63 332.69 21063.10 00:30:47.742 ************************************ 00:30:47.742 END TEST bdev_verify 00:30:47.742 00:30:47.742 real 0m27.621s 00:30:47.742 user 0m53.806s 00:30:47.742 sys 0m0.402s 00:30:47.742 13:54:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.742 13:54:26 -- common/autotest_common.sh@10 -- # set +x 00:30:47.742 ************************************ 00:30:47.742 13:54:26 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:47.742 13:54:26 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:47.742 13:54:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.742 13:54:26 -- common/autotest_common.sh@10 -- # set +x 00:30:47.742 ************************************ 00:30:47.742 START TEST bdev_verify_big_io 00:30:47.742 ************************************ 00:30:47.742 13:54:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:47.742 [2024-07-10 13:54:26.379121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:47.742 [2024-07-10 13:54:26.379332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141994 ] 00:30:47.742 [2024-07-10 13:54:26.542639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:47.742 [2024-07-10 13:54:26.779069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.742 [2024-07-10 13:54:26.779072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.001 Running I/O for 5 seconds... 
00:30:53.269 00:30:53.269 Latency(us) 00:30:53.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.269 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:53.269 Verification LBA range: start 0x0 length 0xa000 00:30:53.269 Nvme0n1 : 5.03 1922.11 120.13 0.00 0.00 65687.07 654.64 99362.88 00:30:53.269 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:53.269 Verification LBA range: start 0xa000 length 0xa000 00:30:53.269 Nvme0n1 : 5.04 1756.82 109.80 0.00 0.00 71795.09 497.24 118136.51 00:30:53.269 =================================================================================================================== 00:30:53.269 Total : 3678.93 229.93 0.00 0.00 68605.62 497.24 118136.51 00:30:55.163 ************************************ 00:30:55.163 END TEST bdev_verify_big_io 00:30:55.163 ************************************ 00:30:55.163 00:30:55.163 real 0m7.932s 00:30:55.163 user 0m14.572s 00:30:55.163 sys 0m0.241s 00:30:55.163 13:54:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.163 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:30:55.163 13:54:34 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:55.163 13:54:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:55.163 13:54:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:55.163 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:30:55.163 ************************************ 00:30:55.163 START TEST bdev_write_zeroes 00:30:55.163 ************************************ 00:30:55.163 13:54:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:55.163 [2024-07-10 13:54:34.363017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:55.163 [2024-07-10 13:54:34.363288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142119 ] 00:30:55.422 [2024-07-10 13:54:34.524440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.422 [2024-07-10 13:54:34.745384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.988 Running I/O for 1 seconds... 
00:30:56.956 00:30:56.956 Latency(us) 00:30:56.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.956 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:56.956 Nvme0n1 : 1.01 38677.80 151.09 0.00 0.00 3301.01 1101.81 24840.72 00:30:56.956 =================================================================================================================== 00:30:56.956 Total : 38677.80 151.09 0.00 0.00 3301.01 1101.81 24840.72 00:30:58.334 ************************************ 00:30:58.334 END TEST bdev_write_zeroes 00:30:58.335 ************************************ 00:30:58.335 00:30:58.335 real 0m3.319s 00:30:58.335 user 0m3.002s 00:30:58.335 sys 0m0.216s 00:30:58.335 13:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:58.335 13:54:37 -- common/autotest_common.sh@10 -- # set +x 00:30:58.335 13:54:37 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:58.335 13:54:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:58.335 13:54:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:58.335 13:54:37 -- common/autotest_common.sh@10 -- # set +x 00:30:58.593 ************************************ 00:30:58.593 START TEST bdev_json_nonenclosed 00:30:58.593 ************************************ 00:30:58.593 13:54:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:58.593 [2024-07-10 13:54:37.759940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:58.593 [2024-07-10 13:54:37.760284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142176 ] 00:30:58.593 [2024-07-10 13:54:37.926690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.852 [2024-07-10 13:54:38.138568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.852 [2024-07-10 13:54:38.138851] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
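spdk_subsystem_init_from_json_config() requires the top-level config to be a single JSON object. A sketch of the kind of fixture this negative test feeds in (the actual contents of test/bdev/nonenclosed.json are not printed in this log, so the exact shape is an assumption):
    # hypothetical stand-in for nonenclosed.json: valid JSON, but the top level
    # is an array rather than an object enclosed in {} -- the app fails to
    # initialize and exits non-zero, which is what the test expects
    cat > nonenclosed.json <<'EOF'
    [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF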
00:30:58.852 [2024-07-10 13:54:38.138924] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:59.418 ************************************ 00:30:59.418 END TEST bdev_json_nonenclosed 00:30:59.418 ************************************ 00:30:59.418 00:30:59.418 real 0m0.926s 00:30:59.418 user 0m0.696s 00:30:59.418 sys 0m0.129s 00:30:59.418 13:54:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.418 13:54:38 -- common/autotest_common.sh@10 -- # set +x 00:30:59.418 13:54:38 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:59.418 13:54:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:59.418 13:54:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:59.419 13:54:38 -- common/autotest_common.sh@10 -- # set +x 00:30:59.419 ************************************ 00:30:59.419 START TEST bdev_json_nonarray 00:30:59.419 ************************************ 00:30:59.419 13:54:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:59.419 [2024-07-10 13:54:38.737395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:59.419 [2024-07-10 13:54:38.737642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142214 ] 00:30:59.676 [2024-07-10 13:54:38.898567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.933 [2024-07-10 13:54:39.115998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.933 [2024-07-10 13:54:39.116347] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
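The companion check: even with a well-formed top-level object, the "subsystems" key must map to an array. Again a hypothetical shape for test/bdev/nonarray.json, since the fixture itself is not shown here:
    # hypothetical stand-in for nonarray.json: "subsystems" is an object, so
    # spdk_subsystem_init_from_json_config rejects it with the error above
    cat > nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }
    EOF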
00:30:59.933 [2024-07-10 13:54:39.116435] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:00.497 ************************************ 00:31:00.497 END TEST bdev_json_nonarray 00:31:00.497 ************************************ 00:31:00.497 00:31:00.497 real 0m0.916s 00:31:00.497 user 0m0.691s 00:31:00.497 sys 0m0.125s 00:31:00.497 13:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.497 13:54:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 13:54:39 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:31:00.497 13:54:39 -- bdev/blockdev.sh@809 -- # cleanup 00:31:00.497 13:54:39 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:00.497 13:54:39 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:00.497 13:54:39 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:31:00.497 ************************************ 00:31:00.497 END TEST blockdev_nvme 00:31:00.497 ************************************ 00:31:00.497 00:31:00.497 real 0m57.247s 00:31:00.497 user 1m36.347s 00:31:00.497 sys 0m3.523s 00:31:00.497 13:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.497 13:54:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 13:54:39 -- spdk/autotest.sh@219 -- # uname -s 00:31:00.497 13:54:39 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:31:00.497 13:54:39 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:00.497 13:54:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:00.497 13:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.497 13:54:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 ************************************ 00:31:00.497 START TEST blockdev_nvme_gpt 00:31:00.497 ************************************ 00:31:00.497 13:54:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:00.497 * Looking for test storage... 
00:31:00.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:00.497 13:54:39 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:00.497 13:54:39 -- bdev/nbd_common.sh@6 -- # set -e 00:31:00.497 13:54:39 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:00.497 13:54:39 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:00.497 13:54:39 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:00.497 13:54:39 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:00.497 13:54:39 -- bdev/blockdev.sh@18 -- # : 00:31:00.497 13:54:39 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:00.497 13:54:39 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:00.497 13:54:39 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:00.497 13:54:39 -- bdev/blockdev.sh@672 -- # uname -s 00:31:00.497 13:54:39 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:00.497 13:54:39 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:00.497 13:54:39 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:31:00.497 13:54:39 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:00.497 13:54:39 -- bdev/blockdev.sh@682 -- # dek= 00:31:00.497 13:54:39 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:00.497 13:54:39 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:00.497 13:54:39 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:00.497 13:54:39 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:31:00.497 13:54:39 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:00.497 13:54:39 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142297 00:31:00.497 13:54:39 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:00.497 13:54:39 -- bdev/blockdev.sh@47 -- # waitforlisten 142297 00:31:00.497 13:54:39 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:00.497 13:54:39 -- common/autotest_common.sh@819 -- # '[' -z 142297 ']' 00:31:00.497 13:54:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.497 13:54:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:00.497 13:54:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.497 13:54:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.497 13:54:39 -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 [2024-07-10 13:54:39.916023] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:00.754 [2024-07-10 13:54:39.916302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142297 ] 00:31:00.754 [2024-07-10 13:54:40.082295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.012 [2024-07-10 13:54:40.307559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:01.012 [2024-07-10 13:54:40.307887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.386 13:54:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.386 13:54:41 -- common/autotest_common.sh@852 -- # return 0 00:31:02.386 13:54:41 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:02.386 13:54:41 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:31:02.386 13:54:41 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:02.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:02.644 Waiting for block devices as requested 00:31:02.644 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.901 13:54:42 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:31:02.901 13:54:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:31:02.901 13:54:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:31:02.901 13:54:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:31:02.901 13:54:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:31:02.901 13:54:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:31:02.901 13:54:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:31:02.901 13:54:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:02.901 13:54:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:31:02.901 13:54:42 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:31:02.901 13:54:42 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:31:02.901 13:54:42 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:31:02.901 13:54:42 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:31:02.901 13:54:42 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:31:02.901 13:54:42 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:31:02.901 13:54:42 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:31:02.901 13:54:42 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:31:02.901 BYT; 00:31:02.901 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:31:02.901 13:54:42 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:31:02.901 BYT; 00:31:02.901 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:31:02.901 13:54:42 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:31:02.901 13:54:42 -- bdev/blockdev.sh@114 -- # break 00:31:02.901 13:54:42 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:31:02.901 13:54:42 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:31:02.901 13:54:42 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:02.901 13:54:42 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:31:03.887 13:54:42 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:31:03.887 13:54:42 -- scripts/common.sh@410 -- # local spdk_guid 00:31:03.887 13:54:42 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:03.887 13:54:42 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:03.887 13:54:42 -- scripts/common.sh@415 -- # IFS='()' 00:31:03.887 13:54:42 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:31:03.887 13:54:42 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:03.887 13:54:42 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:31:03.887 13:54:42 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:03.887 13:54:42 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:03.887 13:54:42 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:03.887 13:54:42 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:31:03.887 13:54:42 -- scripts/common.sh@422 -- # local spdk_guid 00:31:03.887 13:54:42 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:03.887 13:54:42 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:03.887 13:54:42 -- scripts/common.sh@427 -- # IFS='()' 00:31:03.887 13:54:42 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:31:03.887 13:54:42 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:03.887 13:54:42 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:31:03.887 13:54:42 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:03.887 13:54:42 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:03.887 13:54:42 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:03.887 13:54:42 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:31:04.823 The operation has completed successfully. 00:31:04.823 13:54:44 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:31:05.758 The operation has completed successfully.
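Both sgdisk invocations returned success, so the scratch disk now carries SPDK's GPT type GUIDs. Condensing the setup_gpt_conf/get_spdk_gpt trace above into a standalone sketch (destructive; assumes /dev/nvme0n1 is a disposable test disk, and the ${spdk_guid//0x/} cleanup step is inferred from the before/after values in the trace):
    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # split the gpt.h line on parentheses to pull out the partition-type GUID,
    # then drop the 0x prefixes: 6527994e-2c5a-4eec-9613-8f5944074e8b
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//0x/}
    # label the disk and create two half-size partitions...
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # ...then retype partition 1 with SPDK's type GUID and a fixed unique GUID
    # so the gpt bdev module exposes it as Nvme0n1p1
    sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1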
00:31:05.758 13:54:45 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:06.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:06.325 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:07.262 13:54:46 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 [] 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:31:07.262 13:54:46 -- bdev/blockdev.sh@79 -- # local json 00:31:07.262 13:54:46 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:31:07.262 13:54:46 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:07.262 13:54:46 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@738 -- # cat 00:31:07.262 13:54:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:07.262 13:54:46 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:07.262 13:54:46 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:07.262 13:54:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:07.262 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:31:07.262 13:54:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:07.262 13:54:46 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:07.262 13:54:46 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:07.262 13:54:46 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:31:07.565 13:54:46 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:07.566 13:54:46 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:31:07.566 13:54:46 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:07.566 13:54:46 -- bdev/blockdev.sh@752 -- # killprocess 142297 00:31:07.566 13:54:46 -- common/autotest_common.sh@926 -- # '[' -z 142297 ']' 00:31:07.566 13:54:46 -- common/autotest_common.sh@930 -- # kill -0 142297 00:31:07.566 13:54:46 -- common/autotest_common.sh@931 -- # uname 00:31:07.566 13:54:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:07.566 13:54:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142297 00:31:07.566 13:54:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:07.566 13:54:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:07.566 13:54:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142297' 00:31:07.566 killing process with pid 142297 00:31:07.566 13:54:46 -- common/autotest_common.sh@945 -- # kill 142297 00:31:07.566 13:54:46 -- common/autotest_common.sh@950 -- # wait 142297 00:31:10.102 13:54:49 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:10.102 13:54:49 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:31:10.102 13:54:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:10.102 13:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.102 13:54:49 -- common/autotest_common.sh@10 -- # set +x 00:31:10.102 ************************************ 00:31:10.102 START TEST bdev_hello_world 00:31:10.102 ************************************ 00:31:10.102 13:54:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:31:10.102 [2024-07-10 13:54:49.370004] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:10.102 [2024-07-10 13:54:49.370244] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142870 ] 00:31:10.360 [2024-07-10 13:54:49.534678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.618 [2024-07-10 13:54:49.788923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.193 [2024-07-10 13:54:50.350684] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:11.193 [2024-07-10 13:54:50.350838] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:31:11.193 [2024-07-10 13:54:50.350888] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:11.193 [2024-07-10 13:54:50.353965] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:11.193 [2024-07-10 13:54:50.354601] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:11.193 [2024-07-10 13:54:50.354672] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:11.193 [2024-07-10 13:54:50.354874] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:11.193 00:31:11.193 [2024-07-10 13:54:50.354934] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:12.570 ************************************ 00:31:12.570 END TEST bdev_hello_world 00:31:12.570 ************************************ 00:31:12.570 00:31:12.570 real 0m2.575s 00:31:12.570 user 0m2.227s 00:31:12.570 sys 0m0.248s 00:31:12.570 13:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:12.570 13:54:51 -- common/autotest_common.sh@10 -- # set +x 00:31:12.829 13:54:51 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:12.829 13:54:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:12.829 13:54:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:12.829 13:54:51 -- common/autotest_common.sh@10 -- # set +x 00:31:12.829 ************************************ 00:31:12.829 START TEST bdev_bounds 00:31:12.829 ************************************ 00:31:12.829 13:54:51 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:12.829 13:54:51 -- bdev/blockdev.sh@288 -- # bdevio_pid=142920 00:31:12.829 13:54:51 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:12.829 13:54:51 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:12.829 13:54:51 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 142920' 00:31:12.829 Process bdevio pid: 142920 00:31:12.829 13:54:51 -- bdev/blockdev.sh@291 -- # waitforlisten 142920 00:31:12.829 13:54:51 -- common/autotest_common.sh@819 -- # '[' -z 142920 ']' 00:31:12.829 13:54:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.829 13:54:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:12.829 13:54:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:12.829 13:54:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:12.829 13:54:51 -- common/autotest_common.sh@10 -- # set +x 00:31:12.829 [2024-07-10 13:54:52.011732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:12.829 [2024-07-10 13:54:52.012038] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142920 ] 00:31:13.090 [2024-07-10 13:54:52.184295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:13.090 [2024-07-10 13:54:52.437766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.090 [2024-07-10 13:54:52.437826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.090 [2024-07-10 13:54:52.437831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.468 13:54:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:14.468 13:54:53 -- common/autotest_common.sh@852 -- # return 0 00:31:14.468 13:54:53 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:14.468 I/O targets: 00:31:14.468 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:31:14.468 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:31:14.468 00:31:14.468 00:31:14.468 CUnit - A unit testing framework for C - Version 2.1-3 00:31:14.468 http://cunit.sourceforge.net/ 00:31:14.468 00:31:14.468 00:31:14.468 Suite: bdevio tests on: Nvme0n1p2 00:31:14.468 Test: blockdev write read block ...passed 00:31:14.468 Test: blockdev write zeroes read block ...passed 00:31:14.468 Test: blockdev write zeroes read no split ...passed 00:31:14.468 Test: blockdev write zeroes read split ...passed 00:31:14.468 Test: blockdev write zeroes read split partial ...passed 00:31:14.468 Test: blockdev reset ...[2024-07-10 13:54:53.770462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:14.468 [2024-07-10 13:54:53.774847] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:14.468 passed 00:31:14.468 Test: blockdev write read 8 blocks ...passed 00:31:14.468 Test: blockdev write read size > 128k ...passed 00:31:14.468 Test: blockdev write read invalid size ...passed 00:31:14.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.468 Test: blockdev write read max offset ...passed 00:31:14.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.468 Test: blockdev writev readv 8 blocks ...passed 00:31:14.468 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.468 Test: blockdev writev readv block ...passed 00:31:14.468 Test: blockdev writev readv size > 128k ...passed 00:31:14.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.468 Test: blockdev comparev and writev ...[2024-07-10 13:54:53.784291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xa060b000 len:0x1000 00:31:14.468 [2024-07-10 13:54:53.784422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:14.468 passed 00:31:14.468 Test: blockdev nvme passthru rw ...passed 00:31:14.468 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.468 Test: blockdev nvme admin passthru ...passed 00:31:14.468 Test: blockdev copy ...passed 00:31:14.468 Suite: bdevio tests on: Nvme0n1p1 00:31:14.468 Test: blockdev write read block ...passed 00:31:14.468 Test: blockdev write zeroes read block ...passed 00:31:14.468 Test: blockdev write zeroes read no split ...passed 00:31:14.728 Test: blockdev write zeroes read split ...passed 00:31:14.728 Test: blockdev write zeroes read split partial ...passed 00:31:14.728 Test: blockdev reset ...[2024-07-10 13:54:53.872704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:14.728 [2024-07-10 13:54:53.876600] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:14.728 passed 00:31:14.728 Test: blockdev write read 8 blocks ...passed 00:31:14.728 Test: blockdev write read size > 128k ...passed 00:31:14.728 Test: blockdev write read invalid size ...passed 00:31:14.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.728 Test: blockdev write read max offset ...passed 00:31:14.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.728 Test: blockdev writev readv 8 blocks ...passed 00:31:14.728 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.728 Test: blockdev writev readv block ...passed 00:31:14.728 Test: blockdev writev readv size > 128k ...passed 00:31:14.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.728 Test: blockdev comparev and writev ...[2024-07-10 13:54:53.884651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xa060d000 len:0x1000 00:31:14.728 [2024-07-10 13:54:53.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:14.728 passed 00:31:14.728 Test: blockdev nvme passthru rw ...passed 00:31:14.728 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.728 Test: blockdev nvme admin passthru ...passed 00:31:14.728 Test: blockdev copy ...passed 00:31:14.728 00:31:14.728 Run Summary: Type Total Ran Passed Failed Inactive 00:31:14.728 suites 2 2 n/a 0 0 00:31:14.728 tests 46 46 46 0 0 00:31:14.728 asserts 284 284 284 0 n/a 00:31:14.728 00:31:14.728 Elapsed time = 0.574 seconds 00:31:14.728 0 00:31:14.728 13:54:53 -- bdev/blockdev.sh@293 -- # killprocess 142920 00:31:14.728 13:54:53 -- common/autotest_common.sh@926 -- # '[' -z 142920 ']' 00:31:14.728 13:54:53 -- common/autotest_common.sh@930 -- # kill -0 142920 00:31:14.728 13:54:53 -- common/autotest_common.sh@931 -- # uname 00:31:14.728 13:54:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:14.728 13:54:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142920 00:31:14.728 killing process with pid 142920 00:31:14.728 13:54:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:14.728 13:54:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:14.728 13:54:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142920' 00:31:14.728 13:54:53 -- common/autotest_common.sh@945 -- # kill 142920 00:31:14.728 13:54:53 -- common/autotest_common.sh@950 -- # wait 142920 00:31:16.637 ************************************ 00:31:16.637 END TEST bdev_bounds 00:31:16.637 ************************************ 00:31:16.637 13:54:55 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:16.637 00:31:16.637 real 0m3.566s 00:31:16.637 user 0m8.923s 00:31:16.637 sys 0m0.339s 00:31:16.637 13:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.637 13:54:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.637 13:54:55 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:16.637 13:54:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:16.637 13:54:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.637 13:54:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.637 ************************************ 00:31:16.637 START TEST bdev_nbd 
00:31:16.637 ************************************ 00:31:16.637 13:54:55 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:16.637 13:54:55 -- bdev/blockdev.sh@298 -- # uname -s 00:31:16.637 13:54:55 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:16.637 13:54:55 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:16.637 13:54:55 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:16.637 13:54:55 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:31:16.637 13:54:55 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:16.637 13:54:55 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:31:16.637 13:54:55 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:16.637 13:54:55 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:31:16.637 13:54:55 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:16.637 13:54:55 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:31:16.637 13:54:55 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:31:16.637 13:54:55 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:16.637 13:54:55 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:31:16.637 13:54:55 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:16.637 13:54:55 -- bdev/blockdev.sh@316 -- # nbd_pid=143023 00:31:16.637 13:54:55 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:16.637 13:54:55 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:16.637 13:54:55 -- bdev/blockdev.sh@318 -- # waitforlisten 143023 /var/tmp/spdk-nbd.sock 00:31:16.637 13:54:55 -- common/autotest_common.sh@819 -- # '[' -z 143023 ']' 00:31:16.637 13:54:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:16.637 13:54:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:16.637 13:54:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:16.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:16.637 13:54:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:16.637 13:54:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.637 [2024-07-10 13:54:55.640551] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:16.637 [2024-07-10 13:54:55.640769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.637 [2024-07-10 13:54:55.807658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.897 [2024-07-10 13:54:56.037968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.492 13:54:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:17.493 13:54:56 -- common/autotest_common.sh@852 -- # return 0 00:31:17.493 13:54:56 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@24 -- # local i 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:17.493 13:54:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:17.493 13:54:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:17.493 13:54:56 -- common/autotest_common.sh@857 -- # local i 00:31:17.493 13:54:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:17.493 13:54:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:17.493 13:54:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:17.752 13:54:56 -- common/autotest_common.sh@861 -- # break 00:31:17.752 13:54:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:17.752 13:54:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:17.752 13:54:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.752 1+0 records in 00:31:17.752 1+0 records out 00:31:17.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685853 s, 6.0 MB/s 00:31:17.752 13:54:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.752 13:54:56 -- common/autotest_common.sh@874 -- # size=4096 00:31:17.752 13:54:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.752 13:54:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:17.752 13:54:56 -- common/autotest_common.sh@877 -- # return 0 00:31:17.752 13:54:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:17.752 13:54:56 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:17.752 13:54:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:31:17.752 13:54:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:17.752 13:54:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:17.752 13:54:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:17.752 13:54:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:17.752 13:54:57 -- common/autotest_common.sh@857 -- # local i 00:31:17.752 13:54:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:17.752 13:54:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:17.752 13:54:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:17.752 13:54:57 -- common/autotest_common.sh@861 -- # break 00:31:17.752 13:54:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:17.752 13:54:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:17.752 13:54:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.752 1+0 records in 00:31:17.752 1+0 records out 00:31:17.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811231 s, 5.0 MB/s 00:31:17.752 13:54:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.752 13:54:57 -- common/autotest_common.sh@874 -- # size=4096 00:31:17.752 13:54:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:18.012 13:54:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:18.012 13:54:57 -- common/autotest_common.sh@877 -- # return 0 00:31:18.012 13:54:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:18.012 13:54:57 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:18.012 13:54:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:18.012 13:54:57 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:18.012 { 00:31:18.012 "nbd_device": "/dev/nbd0", 00:31:18.012 "bdev_name": "Nvme0n1p1" 00:31:18.012 }, 00:31:18.012 { 00:31:18.012 "nbd_device": "/dev/nbd1", 00:31:18.012 "bdev_name": "Nvme0n1p2" 00:31:18.012 } 00:31:18.013 ]' 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:18.013 { 00:31:18.013 "nbd_device": "/dev/nbd0", 00:31:18.013 "bdev_name": "Nvme0n1p1" 00:31:18.013 }, 00:31:18.013 { 00:31:18.013 "nbd_device": "/dev/nbd1", 00:31:18.013 "bdev_name": "Nvme0n1p2" 00:31:18.013 } 00:31:18.013 ]' 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@51 -- # local i 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:18.013 13:54:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:18.272 13:54:57 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@41 -- # break 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:18.272 13:54:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@41 -- # break 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:18.531 13:54:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:18.791 13:54:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@65 -- # true 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@65 -- # count=0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@122 -- # count=0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@127 -- # return 0 00:31:19.051 13:54:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@12 -- # local i 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:31:19.051 /dev/nbd0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:19.051 13:54:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:19.051 13:54:58 -- common/autotest_common.sh@857 -- # local i 00:31:19.051 13:54:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:19.051 13:54:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:19.051 13:54:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:19.051 13:54:58 -- common/autotest_common.sh@861 -- # break 00:31:19.051 13:54:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:19.051 13:54:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:19.051 13:54:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:19.051 1+0 records in 00:31:19.051 1+0 records out 00:31:19.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000883687 s, 4.6 MB/s 00:31:19.051 13:54:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:19.051 13:54:58 -- common/autotest_common.sh@874 -- # size=4096 00:31:19.051 13:54:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:19.051 13:54:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:19.051 13:54:58 -- common/autotest_common.sh@877 -- # return 0 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:19.051 13:54:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:31:19.310 /dev/nbd1 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:19.310 13:54:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:19.310 13:54:58 -- common/autotest_common.sh@857 -- # local i 00:31:19.310 13:54:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:19.310 13:54:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:19.310 13:54:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:19.310 13:54:58 -- common/autotest_common.sh@861 -- # break 00:31:19.310 13:54:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:19.310 13:54:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:19.310 13:54:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:19.310 1+0 records in 00:31:19.310 1+0 records out 00:31:19.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635585 s, 6.4 MB/s 00:31:19.310 13:54:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:19.310 13:54:58 -- common/autotest_common.sh@874 -- # size=4096 00:31:19.310 13:54:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:19.310 13:54:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:19.310 13:54:58 -- common/autotest_common.sh@877 -- # return 0 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:19.310 13:54:58 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.310 13:54:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:19.569 { 00:31:19.569 "nbd_device": "/dev/nbd0", 00:31:19.569 "bdev_name": "Nvme0n1p1" 00:31:19.569 }, 00:31:19.569 { 00:31:19.569 "nbd_device": "/dev/nbd1", 00:31:19.569 "bdev_name": "Nvme0n1p2" 00:31:19.569 } 00:31:19.569 ]' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:19.569 { 00:31:19.569 "nbd_device": "/dev/nbd0", 00:31:19.569 "bdev_name": "Nvme0n1p1" 00:31:19.569 }, 00:31:19.569 { 00:31:19.569 "nbd_device": "/dev/nbd1", 00:31:19.569 "bdev_name": "Nvme0n1p2" 00:31:19.569 } 00:31:19.569 ]' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:19.569 /dev/nbd1' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:19.569 /dev/nbd1' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@65 -- # count=2 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@95 -- # count=2 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:19.569 256+0 records in 00:31:19.569 256+0 records out 00:31:19.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581678 s, 180 MB/s 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:19.569 13:54:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:19.828 256+0 records in 00:31:19.828 256+0 records out 00:31:19.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0623533 s, 16.8 MB/s 00:31:19.828 13:54:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:19.828 13:54:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:19.828 256+0 records in 00:31:19.828 256+0 records out 00:31:19.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0580687 s, 18.1 MB/s 00:31:19.828 13:54:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:19.828 13:54:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:19.828 13:54:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:19.828 13:54:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@51 -- # local i 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:19.829 13:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@41 -- # break 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:31:20.087 13:54:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.088 13:54:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@41 -- # break 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@45 -- # return 0 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.346 13:54:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:20.604 
13:54:59 -- bdev/nbd_common.sh@65 -- # true 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@65 -- # count=0 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@104 -- # count=0 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@109 -- # return 0 00:31:20.604 13:54:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:20.604 13:54:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:20.930 malloc_lvol_verify 00:31:20.930 13:55:00 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:21.190 9ddb9e1c-dfd0-4da8-a2ab-4af1f52156bd 00:31:21.190 13:55:00 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:21.190 c1de06f5-c34e-4a3f-8db2-61729d7042eb 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:21.449 /dev/nbd0 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:21.449 mke2fs 1.45.5 (07-Jan-2020) 00:31:21.449 00:31:21.449 Filesystem too small for a journal 00:31:21.449 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:21.449 00:31:21.449 Allocating group tables: 0/1 done 00:31:21.449 Writing inode tables: 0/1 done 00:31:21.449 Writing superblocks and filesystem accounting information: 0/1 done 00:31:21.449 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@51 -- # local i 00:31:21.449 13:55:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.450 13:55:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@41 -- # break 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:21.709 13:55:01 -- bdev/nbd_common.sh@147 -- # return 0 00:31:21.709 13:55:01 -- bdev/blockdev.sh@324 -- # killprocess 143023 00:31:21.709 13:55:01 -- common/autotest_common.sh@926 -- # '[' -z 143023 ']' 00:31:21.709 13:55:01 -- 
common/autotest_common.sh@930 -- # kill -0 143023 00:31:21.709 13:55:01 -- common/autotest_common.sh@931 -- # uname 00:31:21.709 13:55:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:21.709 13:55:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143023 00:31:21.709 13:55:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:21.709 13:55:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:21.709 13:55:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143023' 00:31:21.709 killing process with pid 143023 00:31:21.709 13:55:01 -- common/autotest_common.sh@945 -- # kill 143023 00:31:21.709 13:55:01 -- common/autotest_common.sh@950 -- # wait 143023 00:31:23.613 ************************************ 00:31:23.613 END TEST bdev_nbd 00:31:23.613 ************************************ 00:31:23.613 13:55:02 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:23.613 00:31:23.613 real 0m7.021s 00:31:23.613 user 0m9.767s 00:31:23.613 sys 0m1.502s 00:31:23.613 13:55:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.613 13:55:02 -- common/autotest_common.sh@10 -- # set +x 00:31:23.613 13:55:02 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:23.613 13:55:02 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:31:23.613 13:55:02 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:31:23.613 13:55:02 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:31:23.613 skipping fio tests on NVMe due to multi-ns failures. 00:31:23.613 13:55:02 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:23.613 13:55:02 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:23.613 13:55:02 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:31:23.613 13:55:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:23.613 13:55:02 -- common/autotest_common.sh@10 -- # set +x 00:31:23.613 ************************************ 00:31:23.613 START TEST bdev_verify 00:31:23.613 ************************************ 00:31:23.613 13:55:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:23.613 [2024-07-10 13:55:02.701046] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:23.613 [2024-07-10 13:55:02.701274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143284 ] 00:31:23.613 [2024-07-10 13:55:02.866144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.871 [2024-07-10 13:55:03.083344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.871 [2024-07-10 13:55:03.083349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.438 Running I/O for 5 seconds... 
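The verify pass now running is driven by bdevperf; the invocation, copied from the run_test line above, issues 128 outstanding I/Os (-q) of 4096 bytes (-o) in a verify workload (-w) for 5 seconds (-t) on core mask 0x3 (-m):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3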
00:31:29.740 00:31:29.740 Latency(us) 00:31:29.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.740 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:29.740 Verification LBA range: start 0x0 length 0x4ff80 00:31:29.740 Nvme0n1p1 : 5.02 7854.83 30.68 0.00 0.00 16247.55 2861.83 24497.30 00:31:29.740 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:29.740 Verification LBA range: start 0x4ff80 length 0x4ff80 00:31:29.740 Nvme0n1p1 : 5.02 7805.09 30.49 0.00 0.00 16354.05 1652.71 25642.03 00:31:29.740 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:29.740 Verification LBA range: start 0x0 length 0x4ff7f 00:31:29.740 Nvme0n1p2 : 5.02 7856.64 30.69 0.00 0.00 16227.36 854.97 20376.26 00:31:29.740 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:29.740 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:31:29.740 Nvme0n1p2 : 5.02 7805.16 30.49 0.00 0.00 16332.88 987.33 25413.09 00:31:29.740 =================================================================================================================== 00:31:29.740 Total : 31321.72 122.35 0.00 0.00 16290.28 854.97 25642.03 00:31:39.835 ************************************ 00:31:39.835 END TEST bdev_verify 00:31:39.835 ************************************ 00:31:39.835 00:31:39.835 real 0m15.760s 00:31:39.835 user 0m30.143s 00:31:39.835 sys 0m0.320s 00:31:39.835 13:55:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.835 13:55:18 -- common/autotest_common.sh@10 -- # set +x 00:31:39.835 13:55:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:39.835 13:55:18 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:31:39.835 13:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.835 13:55:18 -- common/autotest_common.sh@10 -- # set +x 00:31:39.835 ************************************ 00:31:39.835 START TEST bdev_verify_big_io 00:31:39.835 ************************************ 00:31:39.835 13:55:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:39.835 [2024-07-10 13:55:18.510986] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:39.835 [2024-07-10 13:55:18.511330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143507 ] 00:31:39.835 [2024-07-10 13:55:18.687141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:39.835 [2024-07-10 13:55:18.905687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.835 [2024-07-10 13:55:18.905692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.101 Running I/O for 5 seconds... 
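In the verify table above, the MiB/s column is consistent with IOPS times the 4096-byte I/O size; a quick check against the first row, with the numbers taken from the table:

    # 7854.83 IOPS x 4096 B per I/O, expressed in MiB/s -> ~30.68
    awk 'BEGIN { printf "%.2f MiB/s\n", 7854.83 * 4096 / (1024 * 1024) }'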
00:31:45.405 00:31:45.405 Latency(us) 00:31:45.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.405 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:45.405 Verification LBA range: start 0x0 length 0x4ff8 00:31:45.405 Nvme0n1p1 : 5.08 914.10 57.13 0.00 0.00 138537.11 2289.47 180410.02 00:31:45.405 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:45.405 Verification LBA range: start 0x4ff8 length 0x4ff8 00:31:45.405 Nvme0n1p1 : 5.09 912.87 57.05 0.00 0.00 138768.00 2275.16 188652.10 00:31:45.406 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:45.406 Verification LBA range: start 0x0 length 0x4ff7 00:31:45.406 Nvme0n1p2 : 5.08 921.96 57.62 0.00 0.00 135879.71 3233.87 154767.99 00:31:45.406 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:45.406 Verification LBA range: start 0x4ff7 length 0x4ff7 00:31:45.406 Nvme0n1p2 : 5.09 921.00 57.56 0.00 0.00 136090.70 661.80 177662.66 00:31:45.406 =================================================================================================================== 00:31:45.406 Total : 3669.93 229.37 0.00 0.00 137312.31 661.80 188652.10 00:31:47.312 ************************************ 00:31:47.312 END TEST bdev_verify_big_io 00:31:47.312 ************************************ 00:31:47.312 00:31:47.312 real 0m7.977s 00:31:47.312 user 0m14.671s 00:31:47.312 sys 0m0.284s 00:31:47.312 13:55:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.312 13:55:26 -- common/autotest_common.sh@10 -- # set +x 00:31:47.312 13:55:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:47.312 13:55:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:47.312 13:55:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:47.312 13:55:26 -- common/autotest_common.sh@10 -- # set +x 00:31:47.312 ************************************ 00:31:47.312 START TEST bdev_write_zeroes 00:31:47.312 ************************************ 00:31:47.312 13:55:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:47.312 [2024-07-10 13:55:26.546866] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:47.312 [2024-07-10 13:55:26.547103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143644 ] 00:31:47.572 [2024-07-10 13:55:26.707300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.830 [2024-07-10 13:55:26.938930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.090 Running I/O for 1 seconds... 
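The write_zeroes pass just started reuses the same bdevperf binary with only the workload and duration changed; copied from the run_test line above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1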
00:31:49.469 00:31:49.469 Latency(us) 00:31:49.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.469 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:49.469 Nvme0n1p1 : 1.01 18222.43 71.18 0.00 0.00 7007.96 3033.54 17285.48 00:31:49.469 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:49.469 Nvme0n1p2 : 1.01 18073.49 70.60 0.00 0.00 7060.93 2675.81 20719.68 00:31:49.469 =================================================================================================================== 00:31:49.469 Total : 36295.91 141.78 0.00 0.00 7034.32 2675.81 20719.68 00:31:50.405 ************************************ 00:31:50.405 END TEST bdev_write_zeroes 00:31:50.405 ************************************ 00:31:50.405 00:31:50.405 real 0m3.193s 00:31:50.405 user 0m2.855s 00:31:50.405 sys 0m0.237s 00:31:50.405 13:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:50.405 13:55:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.405 13:55:29 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:50.405 13:55:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:50.405 13:55:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:50.405 13:55:29 -- common/autotest_common.sh@10 -- # set +x 00:31:50.405 ************************************ 00:31:50.405 START TEST bdev_json_nonenclosed 00:31:50.405 ************************************ 00:31:50.405 13:55:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:50.664 [2024-07-10 13:55:29.808779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:50.664 [2024-07-10 13:55:29.809638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143700 ] 00:31:50.664 [2024-07-10 13:55:29.993176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.922 [2024-07-10 13:55:30.208212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.922 [2024-07-10 13:55:30.208475] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
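The error above is the expected outcome of the negative test: spdk_subsystem_init_from_json_config requires the top-level configuration to be a JSON object. A minimal well-formed skeleton, with illustrative empty content:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }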
00:31:50.922 [2024-07-10 13:55:30.208543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:51.489 ************************************ 00:31:51.489 END TEST bdev_json_nonenclosed 00:31:51.489 ************************************ 00:31:51.489 00:31:51.489 real 0m0.936s 00:31:51.489 user 0m0.698s 00:31:51.489 sys 0m0.137s 00:31:51.489 13:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.490 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:31:51.490 13:55:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:51.490 13:55:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:51.490 13:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.490 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:31:51.490 ************************************ 00:31:51.490 START TEST bdev_json_nonarray 00:31:51.490 ************************************ 00:31:51.490 13:55:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:51.490 [2024-07-10 13:55:30.796225] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:51.490 [2024-07-10 13:55:30.796494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143738 ] 00:31:51.748 [2024-07-10 13:55:30.957381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.007 [2024-07-10 13:55:31.190705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.007 [2024-07-10 13:55:31.190985] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:31:52.007 [2024-07-10 13:55:31.191070] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:52.574 00:31:52.574 real 0m0.914s 00:31:52.574 user 0m0.681s 00:31:52.574 sys 0m0.133s 00:31:52.574 13:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.574 13:55:31 -- common/autotest_common.sh@10 -- # set +x 00:31:52.574 ************************************ 00:31:52.574 END TEST bdev_json_nonarray 00:31:52.574 ************************************ 00:31:52.574 13:55:31 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:31:52.574 13:55:31 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:31:52.574 13:55:31 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:31:52.574 13:55:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:52.574 13:55:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:52.574 13:55:31 -- common/autotest_common.sh@10 -- # set +x 00:31:52.574 ************************************ 00:31:52.574 START TEST bdev_gpt_uuid 00:31:52.574 ************************************ 00:31:52.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
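Both JSON negative tests exercise the same validation: the top level must be enclosed in {} and "subsystems" must be an array. A hedged pre-flight check with jq, assuming a config file named bdev.json:

    # Exit 0 only if .subsystems exists and is an array (sketch).
    jq -e '.subsystems | type == "array"' bdev.json > /dev/null &&
        echo 'config shape OK' || echo 'config shape invalid'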
00:31:52.574 13:55:31 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:31:52.574 13:55:31 -- bdev/blockdev.sh@612 -- # local bdev 00:31:52.574 13:55:31 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:31:52.574 13:55:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=143776 00:31:52.574 13:55:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:52.574 13:55:31 -- bdev/blockdev.sh@47 -- # waitforlisten 143776 00:31:52.574 13:55:31 -- common/autotest_common.sh@819 -- # '[' -z 143776 ']' 00:31:52.574 13:55:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.574 13:55:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:52.574 13:55:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.574 13:55:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:52.574 13:55:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:52.574 13:55:31 -- common/autotest_common.sh@10 -- # set +x 00:31:52.574 [2024-07-10 13:55:31.767070] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:52.574 [2024-07-10 13:55:31.767268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143776 ] 00:31:52.834 [2024-07-10 13:55:31.926405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.834 [2024-07-10 13:55:32.156001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:52.834 [2024-07-10 13:55:32.156350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.214 13:55:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:54.214 13:55:33 -- common/autotest_common.sh@852 -- # return 0 00:31:54.214 13:55:33 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:54.214 13:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.214 13:55:33 -- common/autotest_common.sh@10 -- # set +x 00:31:54.214 Some configs were skipped because the RPC state that can call them passed over. 
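After load_config, the test waits for bdev examination to finish before querying each GPT partition by its UUID; the sequence below mirrors the rpc_cmd calls in the surrounding trace (rpc.py path, config path, and UUID taken from this log; default RPC socket assumed):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'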
00:31:54.214 13:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.215 13:55:33 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:31:54.215 13:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.215 13:55:33 -- common/autotest_common.sh@10 -- # set +x 00:31:54.215 13:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.215 13:55:33 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:31:54.215 13:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.215 13:55:33 -- common/autotest_common.sh@10 -- # set +x 00:31:54.215 13:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.215 13:55:33 -- bdev/blockdev.sh@619 -- # bdev='[ 00:31:54.215 { 00:31:54.215 "name": "Nvme0n1p1", 00:31:54.215 "aliases": [ 00:31:54.215 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:31:54.215 ], 00:31:54.215 "product_name": "GPT Disk", 00:31:54.215 "block_size": 4096, 00:31:54.215 "num_blocks": 655104, 00:31:54.215 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:54.215 "assigned_rate_limits": { 00:31:54.215 "rw_ios_per_sec": 0, 00:31:54.215 "rw_mbytes_per_sec": 0, 00:31:54.215 "r_mbytes_per_sec": 0, 00:31:54.215 "w_mbytes_per_sec": 0 00:31:54.215 }, 00:31:54.215 "claimed": false, 00:31:54.215 "zoned": false, 00:31:54.215 "supported_io_types": { 00:31:54.215 "read": true, 00:31:54.215 "write": true, 00:31:54.215 "unmap": true, 00:31:54.215 "write_zeroes": true, 00:31:54.215 "flush": true, 00:31:54.215 "reset": true, 00:31:54.215 "compare": true, 00:31:54.215 "compare_and_write": false, 00:31:54.215 "abort": true, 00:31:54.215 "nvme_admin": false, 00:31:54.215 "nvme_io": false 00:31:54.215 }, 00:31:54.215 "driver_specific": { 00:31:54.215 "gpt": { 00:31:54.215 "base_bdev": "Nvme0n1", 00:31:54.215 "offset_blocks": 256, 00:31:54.215 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:31:54.215 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:31:54.215 "partition_name": "SPDK_TEST_first" 00:31:54.215 } 00:31:54.215 } 00:31:54.215 } 00:31:54.215 ]' 00:31:54.215 13:55:33 -- bdev/blockdev.sh@620 -- # jq -r length 00:31:54.215 13:55:33 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:31:54.215 13:55:33 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:31:54.473 13:55:33 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:54.473 13:55:33 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:54.473 13:55:33 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:54.473 13:55:33 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:54.473 13:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.473 13:55:33 -- common/autotest_common.sh@10 -- # set +x 00:31:54.473 13:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.473 13:55:33 -- bdev/blockdev.sh@624 -- # bdev='[ 00:31:54.473 { 00:31:54.473 "name": "Nvme0n1p2", 00:31:54.473 "aliases": [ 00:31:54.473 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:31:54.473 ], 00:31:54.473 "product_name": "GPT Disk", 00:31:54.473 "block_size": 4096, 00:31:54.473 "num_blocks": 655103, 00:31:54.473 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:54.473 "assigned_rate_limits": { 00:31:54.473 "rw_ios_per_sec": 0, 00:31:54.473 
"rw_mbytes_per_sec": 0, 00:31:54.473 "r_mbytes_per_sec": 0, 00:31:54.473 "w_mbytes_per_sec": 0 00:31:54.473 }, 00:31:54.473 "claimed": false, 00:31:54.473 "zoned": false, 00:31:54.473 "supported_io_types": { 00:31:54.473 "read": true, 00:31:54.473 "write": true, 00:31:54.473 "unmap": true, 00:31:54.473 "write_zeroes": true, 00:31:54.473 "flush": true, 00:31:54.473 "reset": true, 00:31:54.473 "compare": true, 00:31:54.473 "compare_and_write": false, 00:31:54.473 "abort": true, 00:31:54.473 "nvme_admin": false, 00:31:54.473 "nvme_io": false 00:31:54.473 }, 00:31:54.473 "driver_specific": { 00:31:54.473 "gpt": { 00:31:54.473 "base_bdev": "Nvme0n1", 00:31:54.473 "offset_blocks": 655360, 00:31:54.473 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:31:54.473 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:54.473 "partition_name": "SPDK_TEST_second" 00:31:54.473 } 00:31:54.473 } 00:31:54.473 } 00:31:54.473 ]' 00:31:54.473 13:55:33 -- bdev/blockdev.sh@625 -- # jq -r length 00:31:54.473 13:55:33 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:31:54.473 13:55:33 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:31:54.473 13:55:33 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:54.473 13:55:33 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:54.732 13:55:33 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:54.732 13:55:33 -- bdev/blockdev.sh@629 -- # killprocess 143776 00:31:54.732 13:55:33 -- common/autotest_common.sh@926 -- # '[' -z 143776 ']' 00:31:54.732 13:55:33 -- common/autotest_common.sh@930 -- # kill -0 143776 00:31:54.732 13:55:33 -- common/autotest_common.sh@931 -- # uname 00:31:54.732 13:55:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:54.732 13:55:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143776 00:31:54.732 13:55:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:54.732 13:55:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:54.732 13:55:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143776' 00:31:54.732 killing process with pid 143776 00:31:54.732 13:55:33 -- common/autotest_common.sh@945 -- # kill 143776 00:31:54.732 13:55:33 -- common/autotest_common.sh@950 -- # wait 143776 00:31:57.273 00:31:57.273 real 0m4.724s 00:31:57.273 user 0m5.168s 00:31:57.273 sys 0m0.446s 00:31:57.273 13:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.273 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:31:57.273 ************************************ 00:31:57.273 END TEST bdev_gpt_uuid 00:31:57.273 ************************************ 00:31:57.273 13:55:36 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:31:57.273 13:55:36 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:31:57.273 13:55:36 -- bdev/blockdev.sh@809 -- # cleanup 00:31:57.273 13:55:36 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:57.273 13:55:36 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:57.273 13:55:36 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:31:57.273 13:55:36 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:31:57.273 13:55:36 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:31:57.273 13:55:36 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:57.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:57.531 Waiting for block devices as requested 00:31:57.531 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:57.531 13:55:36 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:31:57.531 13:55:36 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:31:57.531 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:31:57.531 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:31:57.531 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:31:57.531 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:31:57.531 13:55:36 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:31:57.531 00:31:57.531 real 0m57.150s 00:31:57.531 user 1m25.894s 00:31:57.531 sys 0m6.041s 00:31:57.531 13:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.531 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:31:57.531 ************************************ 00:31:57.531 END TEST blockdev_nvme_gpt 00:31:57.531 ************************************ 00:31:57.789 13:55:36 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:57.789 13:55:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:57.789 13:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:57.789 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:31:57.789 ************************************ 00:31:57.789 START TEST nvme 00:31:57.789 ************************************ 00:31:57.789 13:55:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:57.789 * Looking for test storage... 00:31:57.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:57.789 13:55:36 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:58.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:58.306 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:59.294 13:55:38 -- nvme/nvme.sh@79 -- # uname 00:31:59.294 13:55:38 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:31:59.294 13:55:38 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:31:59.294 13:55:38 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:31:59.294 13:55:38 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:31:59.294 Waiting for stub to ready for secondary processes... 00:31:59.294 13:55:38 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:31:59.294 13:55:38 -- common/autotest_common.sh@1045 -- # echo 0 00:31:59.294 13:55:38 -- common/autotest_common.sh@1047 -- # stubpid=144248 00:31:59.294 13:55:38 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:31:59.294 13:55:38 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:31:59.294 13:55:38 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:59.294 13:55:38 -- common/autotest_common.sh@1051 -- # [[ -e /proc/144248 ]] 00:31:59.294 13:55:38 -- common/autotest_common.sh@1052 -- # sleep 1s 00:31:59.294 [2024-07-10 13:55:38.533341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
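For reference, the bdev_gpt_uuid checks traced above follow one pattern per partition: fetch the bdev by its GPT GUID over RPC, then assert with jq that exactly one bdev came back and that both its alias and its driver_specific unique_partition_guid round-trip to the same GUID. A minimal sketch of that pattern, reconstructed from the xtrace lines rather than quoted from blockdev.sh, using the first partition's GUID from this run:

    # rpc_cmd is the autotest wrapper around scripts/rpc.py; the herestring
    # plumbing is an assumption, the jq filters are taken from the trace.
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(rpc_cmd bdev_get_bdevs -b "$uuid")
    [[ $(jq -r length <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]

The teardown that follows relies on wipefs --all erasing the primary GPT header, the backup GPT header and the protective MBR, which is exactly what the three "bytes were erased" lines at offsets 0x1000, 0x13ffff000 and 0x1fe report before the kernel re-reads the partition table.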
00:31:59.294 [2024-07-10 13:55:38.533553] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.233 13:55:39 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:00.233 13:55:39 -- common/autotest_common.sh@1051 -- # [[ -e /proc/144248 ]] 00:32:00.233 13:55:39 -- common/autotest_common.sh@1052 -- # sleep 1s 00:32:00.233 [2024-07-10 13:55:39.567796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:00.492 [2024-07-10 13:55:39.744175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.492 [2024-07-10 13:55:39.744190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.492 [2024-07-10 13:55:39.744190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.492 [2024-07-10 13:55:39.758222] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:00.492 [2024-07-10 13:55:39.763875] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:32:00.492 [2024-07-10 13:55:39.764527] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:32:01.429 done. 00:32:01.429 13:55:40 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:01.429 13:55:40 -- common/autotest_common.sh@1054 -- # echo done. 00:32:01.429 13:55:40 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:01.429 13:55:40 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:32:01.429 13:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.429 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.429 ************************************ 00:32:01.429 START TEST nvme_reset 00:32:01.429 ************************************ 00:32:01.429 13:55:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:01.429 Initializing NVMe Controllers 00:32:01.429 Skipping QEMU NVMe SSD at 0000:00:06.0 00:32:01.429 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:32:01.429 ************************************ 00:32:01.429 END TEST nvme_reset 00:32:01.429 ************************************ 00:32:01.429 00:32:01.429 real 0m0.264s 00:32:01.429 user 0m0.083s 00:32:01.429 sys 0m0.119s 00:32:01.429 13:55:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.429 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.688 13:55:40 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:32:01.688 13:55:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:01.688 13:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.688 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:32:01.688 ************************************ 00:32:01.688 START TEST nvme_identify 00:32:01.688 ************************************ 00:32:01.688 13:55:40 -- common/autotest_common.sh@1104 -- # nvme_identify 00:32:01.688 13:55:40 -- nvme/nvme.sh@12 -- # bdfs=() 00:32:01.688 13:55:40 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:32:01.688 13:55:40 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:32:01.688 13:55:40 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:32:01.688 13:55:40 -- common/autotest_common.sh@1498 -- # bdfs=() 
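Before the identify test's device discovery continues below, note what the "Waiting for stub to ready for secondary processes..." exchange above is doing: the stub is an SPDK primary process started with 4096 MiB of hugepage memory on core mask 0xE (hence the three reactors on cores 1-3), and it signals readiness by creating /var/run/spdk_stub0 so that later test binaries can attach as secondary processes. A hedged sketch of that wait loop, reconstructed from the xtrace and not quoted from autotest_common.sh:

    # Poll once per second until the stub primary creates /var/run/spdk_stub0,
    # giving up if the stub process itself (pid 144248 in this run) exits.
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1s
    done
    echo done.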
00:32:01.688 13:55:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:01.688 13:55:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:01.688 13:55:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:01.688 13:55:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:01.688 13:55:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:01.688 13:55:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:01.688 13:55:40 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:32:01.947 [2024-07-10 13:55:41.156466] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 144281 terminated unexpected 00:32:01.947 ===================================================== 00:32:01.947 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:01.947 ===================================================== 00:32:01.947 Controller Capabilities/Features 00:32:01.947 ================================ 00:32:01.947 Vendor ID: 1b36 00:32:01.947 Subsystem Vendor ID: 1af4 00:32:01.947 Serial Number: 12340 00:32:01.947 Model Number: QEMU NVMe Ctrl 00:32:01.947 Firmware Version: 8.0.0 00:32:01.947 Recommended Arb Burst: 6 00:32:01.947 IEEE OUI Identifier: 00 54 52 00:32:01.947 Multi-path I/O 00:32:01.947 May have multiple subsystem ports: No 00:32:01.947 May have multiple controllers: No 00:32:01.947 Associated with SR-IOV VF: No 00:32:01.947 Max Data Transfer Size: 524288 00:32:01.947 Max Number of Namespaces: 256 00:32:01.947 Max Number of I/O Queues: 64 00:32:01.947 NVMe Specification Version (VS): 1.4 00:32:01.947 NVMe Specification Version (Identify): 1.4 00:32:01.947 Maximum Queue Entries: 2048 00:32:01.947 Contiguous Queues Required: Yes 00:32:01.947 Arbitration Mechanisms Supported 00:32:01.947 Weighted Round Robin: Not Supported 00:32:01.947 Vendor Specific: Not Supported 00:32:01.947 Reset Timeout: 7500 ms 00:32:01.947 Doorbell Stride: 4 bytes 00:32:01.947 NVM Subsystem Reset: Not Supported 00:32:01.947 Command Sets Supported 00:32:01.947 NVM Command Set: Supported 00:32:01.947 Boot Partition: Not Supported 00:32:01.947 Memory Page Size Minimum: 4096 bytes 00:32:01.947 Memory Page Size Maximum: 65536 bytes 00:32:01.947 Persistent Memory Region: Not Supported 00:32:01.947 Optional Asynchronous Events Supported 00:32:01.947 Namespace Attribute Notices: Supported 00:32:01.947 Firmware Activation Notices: Not Supported 00:32:01.947 ANA Change Notices: Not Supported 00:32:01.947 PLE Aggregate Log Change Notices: Not Supported 00:32:01.947 LBA Status Info Alert Notices: Not Supported 00:32:01.947 EGE Aggregate Log Change Notices: Not Supported 00:32:01.947 Normal NVM Subsystem Shutdown event: Not Supported 00:32:01.947 Zone Descriptor Change Notices: Not Supported 00:32:01.947 Discovery Log Change Notices: Not Supported 00:32:01.947 Controller Attributes 00:32:01.947 128-bit Host Identifier: Not Supported 00:32:01.947 Non-Operational Permissive Mode: Not Supported 00:32:01.947 NVM Sets: Not Supported 00:32:01.947 Read Recovery Levels: Not Supported 00:32:01.947 Endurance Groups: Not Supported 00:32:01.947 Predictable Latency Mode: Not Supported 00:32:01.947 Traffic Based Keep ALive: Not Supported 00:32:01.947 Namespace Granularity: Not Supported 00:32:01.948 SQ Associations: Not Supported 00:32:01.948 UUID List: Not Supported 00:32:01.948 Multi-Domain Subsystem: Not Supported 00:32:01.948 
Fixed Capacity Management: Not Supported 00:32:01.948 Variable Capacity Management: Not Supported 00:32:01.948 Delete Endurance Group: Not Supported 00:32:01.948 Delete NVM Set: Not Supported 00:32:01.948 Extended LBA Formats Supported: Supported 00:32:01.948 Flexible Data Placement Supported: Not Supported 00:32:01.948 00:32:01.948 Controller Memory Buffer Support 00:32:01.948 ================================ 00:32:01.948 Supported: No 00:32:01.948 00:32:01.948 Persistent Memory Region Support 00:32:01.948 ================================ 00:32:01.948 Supported: No 00:32:01.948 00:32:01.948 Admin Command Set Attributes 00:32:01.948 ============================ 00:32:01.948 Security Send/Receive: Not Supported 00:32:01.948 Format NVM: Supported 00:32:01.948 Firmware Activate/Download: Not Supported 00:32:01.948 Namespace Management: Supported 00:32:01.948 Device Self-Test: Not Supported 00:32:01.948 Directives: Supported 00:32:01.948 NVMe-MI: Not Supported 00:32:01.948 Virtualization Management: Not Supported 00:32:01.948 Doorbell Buffer Config: Supported 00:32:01.948 Get LBA Status Capability: Not Supported 00:32:01.948 Command & Feature Lockdown Capability: Not Supported 00:32:01.948 Abort Command Limit: 4 00:32:01.948 Async Event Request Limit: 4 00:32:01.948 Number of Firmware Slots: N/A 00:32:01.948 Firmware Slot 1 Read-Only: N/A 00:32:01.948 Firmware Activation Without Reset: N/A 00:32:01.948 Multiple Update Detection Support: N/A 00:32:01.948 Firmware Update Granularity: No Information Provided 00:32:01.948 Per-Namespace SMART Log: Yes 00:32:01.948 Asymmetric Namespace Access Log Page: Not Supported 00:32:01.948 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:01.948 Command Effects Log Page: Supported 00:32:01.948 Get Log Page Extended Data: Supported 00:32:01.948 Telemetry Log Pages: Not Supported 00:32:01.948 Persistent Event Log Pages: Not Supported 00:32:01.948 Supported Log Pages Log Page: May Support 00:32:01.948 Commands Supported & Effects Log Page: Not Supported 00:32:01.948 Feature Identifiers & Effects Log Page:May Support 00:32:01.948 NVMe-MI Commands & Effects Log Page: May Support 00:32:01.948 Data Area 4 for Telemetry Log: Not Supported 00:32:01.948 Error Log Page Entries Supported: 1 00:32:01.948 Keep Alive: Not Supported 00:32:01.948 00:32:01.948 NVM Command Set Attributes 00:32:01.948 ========================== 00:32:01.948 Submission Queue Entry Size 00:32:01.948 Max: 64 00:32:01.948 Min: 64 00:32:01.948 Completion Queue Entry Size 00:32:01.948 Max: 16 00:32:01.948 Min: 16 00:32:01.948 Number of Namespaces: 256 00:32:01.948 Compare Command: Supported 00:32:01.948 Write Uncorrectable Command: Not Supported 00:32:01.948 Dataset Management Command: Supported 00:32:01.948 Write Zeroes Command: Supported 00:32:01.948 Set Features Save Field: Supported 00:32:01.948 Reservations: Not Supported 00:32:01.948 Timestamp: Supported 00:32:01.948 Copy: Supported 00:32:01.948 Volatile Write Cache: Present 00:32:01.948 Atomic Write Unit (Normal): 1 00:32:01.948 Atomic Write Unit (PFail): 1 00:32:01.948 Atomic Compare & Write Unit: 1 00:32:01.948 Fused Compare & Write: Not Supported 00:32:01.948 Scatter-Gather List 00:32:01.948 SGL Command Set: Supported 00:32:01.948 SGL Keyed: Not Supported 00:32:01.948 SGL Bit Bucket Descriptor: Not Supported 00:32:01.948 SGL Metadata Pointer: Not Supported 00:32:01.948 Oversized SGL: Not Supported 00:32:01.948 SGL Metadata Address: Not Supported 00:32:01.948 SGL Offset: Not Supported 00:32:01.948 Transport SGL Data Block: Not Supported 
00:32:01.948 Replay Protected Memory Block: Not Supported 00:32:01.948 00:32:01.948 Firmware Slot Information 00:32:01.948 ========================= 00:32:01.948 Active slot: 1 00:32:01.948 Slot 1 Firmware Revision: 1.0 00:32:01.948 00:32:01.948 00:32:01.948 Commands Supported and Effects 00:32:01.948 ============================== 00:32:01.948 Admin Commands 00:32:01.948 -------------- 00:32:01.948 Delete I/O Submission Queue (00h): Supported 00:32:01.948 Create I/O Submission Queue (01h): Supported 00:32:01.948 Get Log Page (02h): Supported 00:32:01.948 Delete I/O Completion Queue (04h): Supported 00:32:01.948 Create I/O Completion Queue (05h): Supported 00:32:01.948 Identify (06h): Supported 00:32:01.948 Abort (08h): Supported 00:32:01.948 Set Features (09h): Supported 00:32:01.948 Get Features (0Ah): Supported 00:32:01.948 Asynchronous Event Request (0Ch): Supported 00:32:01.948 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:01.948 Directive Send (19h): Supported 00:32:01.948 Directive Receive (1Ah): Supported 00:32:01.948 Virtualization Management (1Ch): Supported 00:32:01.948 Doorbell Buffer Config (7Ch): Supported 00:32:01.948 Format NVM (80h): Supported LBA-Change 00:32:01.948 I/O Commands 00:32:01.948 ------------ 00:32:01.948 Flush (00h): Supported LBA-Change 00:32:01.948 Write (01h): Supported LBA-Change 00:32:01.948 Read (02h): Supported 00:32:01.948 Compare (05h): Supported 00:32:01.948 Write Zeroes (08h): Supported LBA-Change 00:32:01.948 Dataset Management (09h): Supported LBA-Change 00:32:01.948 Unknown (0Ch): Supported 00:32:01.948 Unknown (12h): Supported 00:32:01.948 Copy (19h): Supported LBA-Change 00:32:01.948 Unknown (1Dh): Supported LBA-Change 00:32:01.948 00:32:01.948 Error Log 00:32:01.948 ========= 00:32:01.948 00:32:01.948 Arbitration 00:32:01.948 =========== 00:32:01.948 Arbitration Burst: no limit 00:32:01.948 00:32:01.948 Power Management 00:32:01.948 ================ 00:32:01.948 Number of Power States: 1 00:32:01.948 Current Power State: Power State #0 00:32:01.948 Power State #0: 00:32:01.948 Max Power: 25.00 W 00:32:01.948 Non-Operational State: Operational 00:32:01.948 Entry Latency: 16 microseconds 00:32:01.948 Exit Latency: 4 microseconds 00:32:01.948 Relative Read Throughput: 0 00:32:01.948 Relative Read Latency: 0 00:32:01.948 Relative Write Throughput: 0 00:32:01.948 Relative Write Latency: 0 00:32:01.948 Idle Power: Not Reported 00:32:01.948 Active Power: Not Reported 00:32:01.948 Non-Operational Permissive Mode: Not Supported 00:32:01.948 00:32:01.948 Health Information 00:32:01.948 ================== 00:32:01.948 Critical Warnings: 00:32:01.948 Available Spare Space: OK 00:32:01.948 Temperature: OK 00:32:01.948 Device Reliability: OK 00:32:01.948 Read Only: No 00:32:01.948 Volatile Memory Backup: OK 00:32:01.948 Current Temperature: 323 Kelvin (50 Celsius) 00:32:01.948 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:01.948 Available Spare: 0% 00:32:01.948 Available Spare Threshold: 0% 00:32:01.948 Life Percentage Used: 0% 00:32:01.948 Data Units Read: 8013 00:32:01.948 Data Units Written: 3897 00:32:01.948 Host Read Commands: 383360 00:32:01.948 Host Write Commands: 206766 00:32:01.948 Controller Busy Time: 0 minutes 00:32:01.948 Power Cycles: 0 00:32:01.948 Power On Hours: 0 hours 00:32:01.948 Unsafe Shutdowns: 0 00:32:01.948 Unrecoverable Media Errors: 0 00:32:01.948 Lifetime Error Log Entries: 0 00:32:01.948 Warning Temperature Time: 0 minutes 00:32:01.948 Critical Temperature Time: 0 minutes 00:32:01.948 00:32:01.948 
Number of Queues 00:32:01.948 ================ 00:32:01.948 Number of I/O Submission Queues: 64 00:32:01.948 Number of I/O Completion Queues: 64 00:32:01.948 00:32:01.948 ZNS Specific Controller Data 00:32:01.948 ============================ 00:32:01.948 Zone Append Size Limit: 0 00:32:01.948 00:32:01.948 00:32:01.948 Active Namespaces 00:32:01.948 ================= 00:32:01.948 Namespace ID:1 00:32:01.948 Error Recovery Timeout: Unlimited 00:32:01.948 Command Set Identifier: NVM (00h) 00:32:01.948 Deallocate: Supported 00:32:01.948 Deallocated/Unwritten Error: Supported 00:32:01.948 Deallocated Read Value: All 0x00 00:32:01.948 Deallocate in Write Zeroes: Not Supported 00:32:01.948 Deallocated Guard Field: 0xFFFF 00:32:01.948 Flush: Supported 00:32:01.948 Reservation: Not Supported 00:32:01.948 Namespace Sharing Capabilities: Private 00:32:01.948 Size (in LBAs): 1310720 (5GiB) 00:32:01.948 Capacity (in LBAs): 1310720 (5GiB) 00:32:01.948 Utilization (in LBAs): 1310720 (5GiB) 00:32:01.948 Thin Provisioning: Not Supported 00:32:01.948 Per-NS Atomic Units: No 00:32:01.948 Maximum Single Source Range Length: 128 00:32:01.948 Maximum Copy Length: 128 00:32:01.948 Maximum Source Range Count: 128 00:32:01.948 NGUID/EUI64 Never Reused: No 00:32:01.948 Namespace Write Protected: No 00:32:01.948 Number of LBA Formats: 8 00:32:01.948 Current LBA Format: LBA Format #04 00:32:01.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:01.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:01.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:01.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:01.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:01.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:01.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:01.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:01.949 00:32:01.949 13:55:41 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:01.949 13:55:41 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:32:02.207 ===================================================== 00:32:02.207 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:02.207 ===================================================== 00:32:02.207 Controller Capabilities/Features 00:32:02.207 ================================ 00:32:02.207 Vendor ID: 1b36 00:32:02.207 Subsystem Vendor ID: 1af4 00:32:02.207 Serial Number: 12340 00:32:02.207 Model Number: QEMU NVMe Ctrl 00:32:02.207 Firmware Version: 8.0.0 00:32:02.207 Recommended Arb Burst: 6 00:32:02.207 IEEE OUI Identifier: 00 54 52 00:32:02.207 Multi-path I/O 00:32:02.207 May have multiple subsystem ports: No 00:32:02.207 May have multiple controllers: No 00:32:02.207 Associated with SR-IOV VF: No 00:32:02.207 Max Data Transfer Size: 524288 00:32:02.207 Max Number of Namespaces: 256 00:32:02.207 Max Number of I/O Queues: 64 00:32:02.207 NVMe Specification Version (VS): 1.4 00:32:02.207 NVMe Specification Version (Identify): 1.4 00:32:02.207 Maximum Queue Entries: 2048 00:32:02.207 Contiguous Queues Required: Yes 00:32:02.207 Arbitration Mechanisms Supported 00:32:02.207 Weighted Round Robin: Not Supported 00:32:02.207 Vendor Specific: Not Supported 00:32:02.207 Reset Timeout: 7500 ms 00:32:02.207 Doorbell Stride: 4 bytes 00:32:02.207 NVM Subsystem Reset: Not Supported 00:32:02.207 Command Sets Supported 00:32:02.207 NVM Command Set: Supported 00:32:02.207 Boot Partition: Not Supported 00:32:02.207 Memory Page Size 
Minimum: 4096 bytes 00:32:02.207 Memory Page Size Maximum: 65536 bytes 00:32:02.207 Persistent Memory Region: Not Supported 00:32:02.207 Optional Asynchronous Events Supported 00:32:02.207 Namespace Attribute Notices: Supported 00:32:02.207 Firmware Activation Notices: Not Supported 00:32:02.207 ANA Change Notices: Not Supported 00:32:02.207 PLE Aggregate Log Change Notices: Not Supported 00:32:02.207 LBA Status Info Alert Notices: Not Supported 00:32:02.207 EGE Aggregate Log Change Notices: Not Supported 00:32:02.207 Normal NVM Subsystem Shutdown event: Not Supported 00:32:02.207 Zone Descriptor Change Notices: Not Supported 00:32:02.207 Discovery Log Change Notices: Not Supported 00:32:02.207 Controller Attributes 00:32:02.207 128-bit Host Identifier: Not Supported 00:32:02.207 Non-Operational Permissive Mode: Not Supported 00:32:02.207 NVM Sets: Not Supported 00:32:02.207 Read Recovery Levels: Not Supported 00:32:02.207 Endurance Groups: Not Supported 00:32:02.207 Predictable Latency Mode: Not Supported 00:32:02.207 Traffic Based Keep ALive: Not Supported 00:32:02.207 Namespace Granularity: Not Supported 00:32:02.207 SQ Associations: Not Supported 00:32:02.207 UUID List: Not Supported 00:32:02.207 Multi-Domain Subsystem: Not Supported 00:32:02.207 Fixed Capacity Management: Not Supported 00:32:02.207 Variable Capacity Management: Not Supported 00:32:02.207 Delete Endurance Group: Not Supported 00:32:02.207 Delete NVM Set: Not Supported 00:32:02.207 Extended LBA Formats Supported: Supported 00:32:02.207 Flexible Data Placement Supported: Not Supported 00:32:02.207 00:32:02.207 Controller Memory Buffer Support 00:32:02.207 ================================ 00:32:02.207 Supported: No 00:32:02.207 00:32:02.207 Persistent Memory Region Support 00:32:02.207 ================================ 00:32:02.207 Supported: No 00:32:02.207 00:32:02.207 Admin Command Set Attributes 00:32:02.207 ============================ 00:32:02.207 Security Send/Receive: Not Supported 00:32:02.207 Format NVM: Supported 00:32:02.207 Firmware Activate/Download: Not Supported 00:32:02.207 Namespace Management: Supported 00:32:02.207 Device Self-Test: Not Supported 00:32:02.207 Directives: Supported 00:32:02.207 NVMe-MI: Not Supported 00:32:02.207 Virtualization Management: Not Supported 00:32:02.207 Doorbell Buffer Config: Supported 00:32:02.207 Get LBA Status Capability: Not Supported 00:32:02.207 Command & Feature Lockdown Capability: Not Supported 00:32:02.207 Abort Command Limit: 4 00:32:02.207 Async Event Request Limit: 4 00:32:02.207 Number of Firmware Slots: N/A 00:32:02.207 Firmware Slot 1 Read-Only: N/A 00:32:02.207 Firmware Activation Without Reset: N/A 00:32:02.207 Multiple Update Detection Support: N/A 00:32:02.207 Firmware Update Granularity: No Information Provided 00:32:02.207 Per-Namespace SMART Log: Yes 00:32:02.208 Asymmetric Namespace Access Log Page: Not Supported 00:32:02.208 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:02.208 Command Effects Log Page: Supported 00:32:02.208 Get Log Page Extended Data: Supported 00:32:02.208 Telemetry Log Pages: Not Supported 00:32:02.208 Persistent Event Log Pages: Not Supported 00:32:02.208 Supported Log Pages Log Page: May Support 00:32:02.208 Commands Supported & Effects Log Page: Not Supported 00:32:02.208 Feature Identifiers & Effects Log Page:May Support 00:32:02.208 NVMe-MI Commands & Effects Log Page: May Support 00:32:02.208 Data Area 4 for Telemetry Log: Not Supported 00:32:02.208 Error Log Page Entries Supported: 1 00:32:02.208 Keep Alive: Not 
Supported 00:32:02.208 00:32:02.208 NVM Command Set Attributes 00:32:02.208 ========================== 00:32:02.208 Submission Queue Entry Size 00:32:02.208 Max: 64 00:32:02.208 Min: 64 00:32:02.208 Completion Queue Entry Size 00:32:02.208 Max: 16 00:32:02.208 Min: 16 00:32:02.208 Number of Namespaces: 256 00:32:02.208 Compare Command: Supported 00:32:02.208 Write Uncorrectable Command: Not Supported 00:32:02.208 Dataset Management Command: Supported 00:32:02.208 Write Zeroes Command: Supported 00:32:02.208 Set Features Save Field: Supported 00:32:02.208 Reservations: Not Supported 00:32:02.208 Timestamp: Supported 00:32:02.208 Copy: Supported 00:32:02.208 Volatile Write Cache: Present 00:32:02.208 Atomic Write Unit (Normal): 1 00:32:02.208 Atomic Write Unit (PFail): 1 00:32:02.208 Atomic Compare & Write Unit: 1 00:32:02.208 Fused Compare & Write: Not Supported 00:32:02.208 Scatter-Gather List 00:32:02.208 SGL Command Set: Supported 00:32:02.208 SGL Keyed: Not Supported 00:32:02.208 SGL Bit Bucket Descriptor: Not Supported 00:32:02.208 SGL Metadata Pointer: Not Supported 00:32:02.208 Oversized SGL: Not Supported 00:32:02.208 SGL Metadata Address: Not Supported 00:32:02.208 SGL Offset: Not Supported 00:32:02.208 Transport SGL Data Block: Not Supported 00:32:02.208 Replay Protected Memory Block: Not Supported 00:32:02.208 00:32:02.208 Firmware Slot Information 00:32:02.208 ========================= 00:32:02.208 Active slot: 1 00:32:02.208 Slot 1 Firmware Revision: 1.0 00:32:02.208 00:32:02.208 00:32:02.208 Commands Supported and Effects 00:32:02.208 ============================== 00:32:02.208 Admin Commands 00:32:02.208 -------------- 00:32:02.208 Delete I/O Submission Queue (00h): Supported 00:32:02.208 Create I/O Submission Queue (01h): Supported 00:32:02.208 Get Log Page (02h): Supported 00:32:02.208 Delete I/O Completion Queue (04h): Supported 00:32:02.208 Create I/O Completion Queue (05h): Supported 00:32:02.208 Identify (06h): Supported 00:32:02.208 Abort (08h): Supported 00:32:02.208 Set Features (09h): Supported 00:32:02.208 Get Features (0Ah): Supported 00:32:02.208 Asynchronous Event Request (0Ch): Supported 00:32:02.208 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:02.208 Directive Send (19h): Supported 00:32:02.208 Directive Receive (1Ah): Supported 00:32:02.208 Virtualization Management (1Ch): Supported 00:32:02.208 Doorbell Buffer Config (7Ch): Supported 00:32:02.208 Format NVM (80h): Supported LBA-Change 00:32:02.208 I/O Commands 00:32:02.208 ------------ 00:32:02.208 Flush (00h): Supported LBA-Change 00:32:02.208 Write (01h): Supported LBA-Change 00:32:02.208 Read (02h): Supported 00:32:02.208 Compare (05h): Supported 00:32:02.208 Write Zeroes (08h): Supported LBA-Change 00:32:02.208 Dataset Management (09h): Supported LBA-Change 00:32:02.208 Unknown (0Ch): Supported 00:32:02.208 Unknown (12h): Supported 00:32:02.208 Copy (19h): Supported LBA-Change 00:32:02.208 Unknown (1Dh): Supported LBA-Change 00:32:02.208 00:32:02.208 Error Log 00:32:02.208 ========= 00:32:02.208 00:32:02.208 Arbitration 00:32:02.208 =========== 00:32:02.208 Arbitration Burst: no limit 00:32:02.208 00:32:02.208 Power Management 00:32:02.208 ================ 00:32:02.208 Number of Power States: 1 00:32:02.208 Current Power State: Power State #0 00:32:02.208 Power State #0: 00:32:02.208 Max Power: 25.00 W 00:32:02.208 Non-Operational State: Operational 00:32:02.208 Entry Latency: 16 microseconds 00:32:02.208 Exit Latency: 4 microseconds 00:32:02.208 Relative Read Throughput: 0 
00:32:02.208 Relative Read Latency: 0 00:32:02.208 Relative Write Throughput: 0 00:32:02.208 Relative Write Latency: 0 00:32:02.208 Idle Power: Not Reported 00:32:02.208 Active Power: Not Reported 00:32:02.208 Non-Operational Permissive Mode: Not Supported 00:32:02.208 00:32:02.208 Health Information 00:32:02.208 ================== 00:32:02.208 Critical Warnings: 00:32:02.208 Available Spare Space: OK 00:32:02.208 Temperature: OK 00:32:02.208 Device Reliability: OK 00:32:02.208 Read Only: No 00:32:02.208 Volatile Memory Backup: OK 00:32:02.208 Current Temperature: 323 Kelvin (50 Celsius) 00:32:02.208 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:02.208 Available Spare: 0% 00:32:02.208 Available Spare Threshold: 0% 00:32:02.208 Life Percentage Used: 0% 00:32:02.208 Data Units Read: 8013 00:32:02.208 Data Units Written: 3897 00:32:02.208 Host Read Commands: 383360 00:32:02.208 Host Write Commands: 206766 00:32:02.208 Controller Busy Time: 0 minutes 00:32:02.208 Power Cycles: 0 00:32:02.208 Power On Hours: 0 hours 00:32:02.208 Unsafe Shutdowns: 0 00:32:02.208 Unrecoverable Media Errors: 0 00:32:02.208 Lifetime Error Log Entries: 0 00:32:02.208 Warning Temperature Time: 0 minutes 00:32:02.208 Critical Temperature Time: 0 minutes 00:32:02.208 00:32:02.208 Number of Queues 00:32:02.208 ================ 00:32:02.208 Number of I/O Submission Queues: 64 00:32:02.208 Number of I/O Completion Queues: 64 00:32:02.208 00:32:02.208 ZNS Specific Controller Data 00:32:02.208 ============================ 00:32:02.208 Zone Append Size Limit: 0 00:32:02.208 00:32:02.208 00:32:02.208 Active Namespaces 00:32:02.208 ================= 00:32:02.208 Namespace ID:1 00:32:02.208 Error Recovery Timeout: Unlimited 00:32:02.208 Command Set Identifier: NVM (00h) 00:32:02.208 Deallocate: Supported 00:32:02.208 Deallocated/Unwritten Error: Supported 00:32:02.208 Deallocated Read Value: All 0x00 00:32:02.208 Deallocate in Write Zeroes: Not Supported 00:32:02.208 Deallocated Guard Field: 0xFFFF 00:32:02.208 Flush: Supported 00:32:02.208 Reservation: Not Supported 00:32:02.208 Namespace Sharing Capabilities: Private 00:32:02.208 Size (in LBAs): 1310720 (5GiB) 00:32:02.208 Capacity (in LBAs): 1310720 (5GiB) 00:32:02.208 Utilization (in LBAs): 1310720 (5GiB) 00:32:02.208 Thin Provisioning: Not Supported 00:32:02.208 Per-NS Atomic Units: No 00:32:02.208 Maximum Single Source Range Length: 128 00:32:02.208 Maximum Copy Length: 128 00:32:02.208 Maximum Source Range Count: 128 00:32:02.208 NGUID/EUI64 Never Reused: No 00:32:02.208 Namespace Write Protected: No 00:32:02.208 Number of LBA Formats: 8 00:32:02.208 Current LBA Format: LBA Format #04 00:32:02.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:02.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:02.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:02.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:02.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:02.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:02.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:02.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:02.208 00:32:02.208 ************************************ 00:32:02.208 END TEST nvme_identify 00:32:02.208 ************************************ 00:32:02.208 00:32:02.208 real 0m0.701s 00:32:02.208 user 0m0.274s 00:32:02.208 sys 0m0.310s 00:32:02.208 13:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.208 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:32:02.466 
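The two identify passes above run the same binary with different targeting, and both command lines appear verbatim in the trace: the first probes every controller the driver can see, the second pins one controller through a transport ID string. With a single QEMU controller at 0000:00:06.0, the two dumps come out identical:

    # Pass 1: enumerate and dump all attached controllers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
    # Pass 2: restrict identification to the PCIe function named in -r.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0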
13:55:41 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:32:02.466 13:55:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:02.466 13:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.466 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:32:02.466 ************************************ 00:32:02.466 START TEST nvme_perf 00:32:02.466 ************************************ 00:32:02.466 13:55:41 -- common/autotest_common.sh@1104 -- # nvme_perf 00:32:02.466 13:55:41 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:32:03.837 Initializing NVMe Controllers 00:32:03.837 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:03.837 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:03.837 Initialization complete. Launching workers. 00:32:03.837 ======================================================== 00:32:03.837 Latency(us) 00:32:03.837 Device Information : IOPS MiB/s Average min max 00:32:03.837 PCIE (0000:00:06.0) NSID 1 from core 0: 52863.95 619.50 2420.84 1195.41 7149.08 00:32:03.837 ======================================================== 00:32:03.837 Total : 52863.95 619.50 2420.84 1195.41 7149.08 00:32:03.837 00:32:03.837 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:03.837 ================================================================================= 00:32:03.837 1.00000% : 1352.217us 00:32:03.837 10.00000% : 1616.936us 00:32:03.837 25.00000% : 1931.738us 00:32:03.837 50.00000% : 2389.631us 00:32:03.837 75.00000% : 2804.597us 00:32:03.837 90.00000% : 3176.636us 00:32:03.837 95.00000% : 3577.293us 00:32:03.837 98.00000% : 4121.041us 00:32:03.837 99.00000% : 4750.645us 00:32:03.837 99.50000% : 5380.248us 00:32:03.837 99.90000% : 6524.982us 00:32:03.837 99.99000% : 6982.875us 00:32:03.837 99.99900% : 7154.585us 00:32:03.837 99.99990% : 7154.585us 00:32:03.837 99.99999% : 7154.585us 00:32:03.837 00:32:03.837 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:03.837 ============================================================================== 00:32:03.837 Range in us Cumulative IO count 00:32:03.837 1194.816 - 1201.970: 0.0038% ( 2) 00:32:03.837 1201.970 - 1209.125: 0.0057% ( 1) 00:32:03.837 1209.125 - 1216.279: 0.0151% ( 5) 00:32:03.837 1216.279 - 1223.434: 0.0208% ( 3) 00:32:03.837 1223.434 - 1230.589: 0.0303% ( 5) 00:32:03.837 1230.589 - 1237.743: 0.0359% ( 3) 00:32:03.837 1237.743 - 1244.898: 0.0492% ( 7) 00:32:03.837 1244.898 - 1252.052: 0.0700% ( 11) 00:32:03.837 1252.052 - 1259.207: 0.0851% ( 8) 00:32:03.837 1259.207 - 1266.362: 0.1078% ( 12) 00:32:03.837 1266.362 - 1273.516: 0.1457% ( 20) 00:32:03.837 1273.516 - 1280.671: 0.1892% ( 23) 00:32:03.837 1280.671 - 1287.825: 0.2270% ( 20) 00:32:03.837 1287.825 - 1294.980: 0.2781% ( 27) 00:32:03.837 1294.980 - 1302.134: 0.3386% ( 32) 00:32:03.837 1302.134 - 1309.289: 0.4048% ( 35) 00:32:03.837 1309.289 - 1316.444: 0.4824% ( 41) 00:32:03.837 1316.444 - 1323.598: 0.5694% ( 46) 00:32:03.837 1323.598 - 1330.753: 0.6678% ( 52) 00:32:03.838 1330.753 - 1337.907: 0.7680% ( 53) 00:32:03.838 1337.907 - 1345.062: 0.9099% ( 75) 00:32:03.838 1345.062 - 1352.217: 1.0291% ( 63) 00:32:03.838 1352.217 - 1359.371: 1.1917% ( 86) 00:32:03.838 1359.371 - 1366.526: 1.3336% ( 75) 00:32:03.838 1366.526 - 1373.680: 1.5020% ( 89) 00:32:03.838 1373.680 - 1380.835: 1.6552% ( 81) 00:32:03.838 1380.835 - 1387.990: 1.8368% ( 96) 00:32:03.838 1387.990 - 1395.144: 2.0108% ( 92) 00:32:03.838 1395.144 - 
1402.299: 2.1867% ( 93) 00:32:03.838 1402.299 - 1409.453: 2.3986% ( 112) 00:32:03.838 1409.453 - 1416.608: 2.5670% ( 89) 00:32:03.838 1416.608 - 1423.762: 2.7845% ( 115) 00:32:03.838 1423.762 - 1430.917: 2.9831% ( 105) 00:32:03.838 1430.917 - 1438.072: 3.2290% ( 130) 00:32:03.838 1438.072 - 1445.226: 3.4371% ( 110) 00:32:03.838 1445.226 - 1452.381: 3.6679% ( 122) 00:32:03.838 1452.381 - 1459.535: 3.9100% ( 128) 00:32:03.838 1459.535 - 1466.690: 4.1219% ( 112) 00:32:03.838 1466.690 - 1473.845: 4.3943% ( 144) 00:32:03.838 1473.845 - 1480.999: 4.6175% ( 118) 00:32:03.838 1480.999 - 1488.154: 4.9277% ( 164) 00:32:03.838 1488.154 - 1495.308: 5.1585% ( 122) 00:32:03.838 1495.308 - 1502.463: 5.4404% ( 149) 00:32:03.838 1502.463 - 1509.617: 5.7166% ( 146) 00:32:03.838 1509.617 - 1516.772: 5.9814% ( 140) 00:32:03.838 1516.772 - 1523.927: 6.2689% ( 152) 00:32:03.838 1523.927 - 1531.081: 6.5394% ( 143) 00:32:03.838 1531.081 - 1538.236: 6.8497% ( 164) 00:32:03.838 1538.236 - 1545.390: 7.1315% ( 149) 00:32:03.838 1545.390 - 1552.545: 7.4436% ( 165) 00:32:03.838 1552.545 - 1559.700: 7.7387% ( 156) 00:32:03.838 1559.700 - 1566.854: 8.0319% ( 155) 00:32:03.838 1566.854 - 1574.009: 8.3516% ( 169) 00:32:03.838 1574.009 - 1581.163: 8.6524% ( 159) 00:32:03.838 1581.163 - 1588.318: 8.9778% ( 172) 00:32:03.838 1588.318 - 1595.472: 9.2691% ( 154) 00:32:03.838 1595.472 - 1602.627: 9.6001% ( 175) 00:32:03.838 1602.627 - 1609.782: 9.9122% ( 165) 00:32:03.838 1609.782 - 1616.936: 10.2376% ( 172) 00:32:03.838 1616.936 - 1624.091: 10.5630% ( 172) 00:32:03.838 1624.091 - 1631.245: 10.8789% ( 167) 00:32:03.838 1631.245 - 1638.400: 11.2080% ( 174) 00:32:03.838 1638.400 - 1645.555: 11.5182% ( 164) 00:32:03.838 1645.555 - 1652.709: 11.8776% ( 190) 00:32:03.838 1652.709 - 1659.864: 12.1973% ( 169) 00:32:03.838 1659.864 - 1667.018: 12.5549% ( 189) 00:32:03.838 1667.018 - 1674.173: 12.8859% ( 175) 00:32:03.838 1674.173 - 1681.328: 13.2056% ( 169) 00:32:03.838 1681.328 - 1688.482: 13.5518% ( 183) 00:32:03.838 1688.482 - 1695.637: 13.8828% ( 175) 00:32:03.838 1695.637 - 1702.791: 14.2384% ( 188) 00:32:03.838 1702.791 - 1709.946: 14.5600% ( 170) 00:32:03.838 1709.946 - 1717.100: 14.9175% ( 189) 00:32:03.838 1717.100 - 1724.255: 15.2694% ( 186) 00:32:03.838 1724.255 - 1731.410: 15.5834% ( 166) 00:32:03.838 1731.410 - 1738.564: 15.9390% ( 188) 00:32:03.838 1738.564 - 1745.719: 16.2663% ( 173) 00:32:03.838 1745.719 - 1752.873: 16.6257% ( 190) 00:32:03.838 1752.873 - 1760.028: 16.9681% ( 181) 00:32:03.838 1760.028 - 1767.183: 17.3275% ( 190) 00:32:03.838 1767.183 - 1774.337: 17.6793% ( 186) 00:32:03.838 1774.337 - 1781.492: 18.0331% ( 187) 00:32:03.838 1781.492 - 1788.646: 18.3868% ( 187) 00:32:03.838 1788.646 - 1795.801: 18.7368% ( 185) 00:32:03.838 1795.801 - 1802.955: 19.1075% ( 196) 00:32:03.838 1802.955 - 1810.110: 19.4613% ( 187) 00:32:03.838 1810.110 - 1817.265: 19.8301% ( 195) 00:32:03.838 1817.265 - 1824.419: 20.1971% ( 194) 00:32:03.838 1824.419 - 1831.574: 20.5584% ( 191) 00:32:03.838 1831.574 - 1845.883: 21.2735% ( 378) 00:32:03.838 1845.883 - 1860.192: 21.9998% ( 384) 00:32:03.838 1860.192 - 1874.501: 22.7660% ( 405) 00:32:03.838 1874.501 - 1888.810: 23.5094% ( 393) 00:32:03.838 1888.810 - 1903.120: 24.2566% ( 395) 00:32:03.838 1903.120 - 1917.429: 24.9905% ( 388) 00:32:03.838 1917.429 - 1931.738: 25.7340% ( 393) 00:32:03.838 1931.738 - 1946.047: 26.4982% ( 404) 00:32:03.838 1946.047 - 1960.356: 27.2548% ( 400) 00:32:03.838 1960.356 - 1974.666: 28.0229% ( 406) 00:32:03.838 1974.666 - 1988.975: 28.7965% ( 409) 
00:32:03.838 1988.975 - 2003.284: 29.5645% ( 406) 00:32:03.838 2003.284 - 2017.593: 30.3080% ( 393) 00:32:03.838 2017.593 - 2031.902: 31.0873% ( 412) 00:32:03.838 2031.902 - 2046.211: 31.8724% ( 415) 00:32:03.838 2046.211 - 2060.521: 32.6479% ( 410) 00:32:03.838 2060.521 - 2074.830: 33.4254% ( 411) 00:32:03.838 2074.830 - 2089.139: 34.2123% ( 416) 00:32:03.838 2089.139 - 2103.448: 34.9898% ( 411) 00:32:03.838 2103.448 - 2117.757: 35.7540% ( 404) 00:32:03.838 2117.757 - 2132.066: 36.5182% ( 404) 00:32:03.838 2132.066 - 2146.376: 37.2919% ( 409) 00:32:03.838 2146.376 - 2160.685: 38.0845% ( 419) 00:32:03.838 2160.685 - 2174.994: 38.8923% ( 427) 00:32:03.838 2174.994 - 2189.303: 39.6830% ( 418) 00:32:03.838 2189.303 - 2203.612: 40.4812% ( 422) 00:32:03.838 2203.612 - 2217.921: 41.2455% ( 404) 00:32:03.838 2217.921 - 2232.231: 42.0286% ( 414) 00:32:03.838 2232.231 - 2246.540: 42.8231% ( 420) 00:32:03.838 2246.540 - 2260.849: 43.6119% ( 417) 00:32:03.838 2260.849 - 2275.158: 44.4121% ( 423) 00:32:03.838 2275.158 - 2289.467: 45.2217% ( 428) 00:32:03.838 2289.467 - 2303.776: 46.0219% ( 423) 00:32:03.838 2303.776 - 2318.086: 46.8050% ( 414) 00:32:03.838 2318.086 - 2332.395: 47.5749% ( 407) 00:32:03.838 2332.395 - 2346.704: 48.3751% ( 423) 00:32:03.838 2346.704 - 2361.013: 49.1715% ( 421) 00:32:03.838 2361.013 - 2375.322: 49.9962% ( 436) 00:32:03.838 2375.322 - 2389.631: 50.8285% ( 440) 00:32:03.838 2389.631 - 2403.941: 51.6438% ( 431) 00:32:03.838 2403.941 - 2418.250: 52.4856% ( 445) 00:32:03.838 2418.250 - 2432.559: 53.2990% ( 430) 00:32:03.838 2432.559 - 2446.868: 54.1200% ( 434) 00:32:03.838 2446.868 - 2461.177: 54.9334% ( 430) 00:32:03.838 2461.177 - 2475.486: 55.7525% ( 433) 00:32:03.838 2475.486 - 2489.796: 56.6037% ( 450) 00:32:03.838 2489.796 - 2504.105: 57.4512% ( 448) 00:32:03.838 2504.105 - 2518.414: 58.2949% ( 446) 00:32:03.838 2518.414 - 2532.723: 59.1537% ( 454) 00:32:03.838 2532.723 - 2547.032: 59.9614% ( 427) 00:32:03.838 2547.032 - 2561.341: 60.7900% ( 438) 00:32:03.838 2561.341 - 2575.651: 61.6261% ( 442) 00:32:03.838 2575.651 - 2589.960: 62.4735% ( 448) 00:32:03.838 2589.960 - 2604.269: 63.3342% ( 455) 00:32:03.838 2604.269 - 2618.578: 64.2138% ( 465) 00:32:03.838 2618.578 - 2632.887: 65.0537% ( 444) 00:32:03.838 2632.887 - 2647.197: 65.8993% ( 447) 00:32:03.838 2647.197 - 2661.506: 66.7316% ( 440) 00:32:03.838 2661.506 - 2675.815: 67.5847% ( 451) 00:32:03.838 2675.815 - 2690.124: 68.3963% ( 429) 00:32:03.838 2690.124 - 2704.433: 69.2513% ( 452) 00:32:03.838 2704.433 - 2718.742: 70.0893% ( 443) 00:32:03.838 2718.742 - 2733.052: 70.9708% ( 466) 00:32:03.838 2733.052 - 2747.361: 71.8126% ( 445) 00:32:03.838 2747.361 - 2761.670: 72.6525% ( 444) 00:32:03.838 2761.670 - 2775.979: 73.5018% ( 449) 00:32:03.838 2775.979 - 2790.288: 74.3474% ( 447) 00:32:03.838 2790.288 - 2804.597: 75.1873% ( 444) 00:32:03.838 2804.597 - 2818.907: 76.0442% ( 453) 00:32:03.838 2818.907 - 2833.216: 76.9011% ( 453) 00:32:03.838 2833.216 - 2847.525: 77.7448% ( 446) 00:32:03.838 2847.525 - 2861.834: 78.5695% ( 436) 00:32:03.838 2861.834 - 2876.143: 79.3905% ( 434) 00:32:03.838 2876.143 - 2890.452: 80.1831% ( 419) 00:32:03.838 2890.452 - 2904.762: 80.9568% ( 409) 00:32:03.838 2904.762 - 2919.071: 81.7116% ( 399) 00:32:03.838 2919.071 - 2933.380: 82.4436% ( 387) 00:32:03.838 2933.380 - 2947.689: 83.1379% ( 367) 00:32:03.838 2947.689 - 2961.998: 83.7943% ( 347) 00:32:03.838 2961.998 - 2976.307: 84.3901% ( 315) 00:32:03.838 2976.307 - 2990.617: 84.9690% ( 306) 00:32:03.838 2990.617 - 3004.926: 85.5005% ( 281) 
00:32:03.838 3004.926 - 3019.235: 86.0056% ( 267) 00:32:03.838 3019.235 - 3033.544: 86.4955% ( 259) 00:32:03.838 3033.544 - 3047.853: 86.9514% ( 241) 00:32:03.838 3047.853 - 3062.162: 87.3789% ( 226) 00:32:03.838 3062.162 - 3076.472: 87.7686% ( 206) 00:32:03.838 3076.472 - 3090.781: 88.1564% ( 205) 00:32:03.838 3090.781 - 3105.090: 88.5158% ( 190) 00:32:03.838 3105.090 - 3119.399: 88.8752% ( 190) 00:32:03.838 3119.399 - 3133.708: 89.1930% ( 168) 00:32:03.838 3133.708 - 3148.017: 89.4843% ( 154) 00:32:03.838 3148.017 - 3162.327: 89.7624% ( 147) 00:32:03.838 3162.327 - 3176.636: 90.0367% ( 145) 00:32:03.838 3176.636 - 3190.945: 90.2864% ( 132) 00:32:03.838 3190.945 - 3205.254: 90.5304% ( 129) 00:32:03.838 3205.254 - 3219.563: 90.7744% ( 129) 00:32:03.838 3219.563 - 3233.872: 91.0128% ( 126) 00:32:03.838 3233.872 - 3248.182: 91.2398% ( 120) 00:32:03.838 3248.182 - 3262.491: 91.4441% ( 108) 00:32:03.838 3262.491 - 3276.800: 91.6389% ( 103) 00:32:03.838 3276.800 - 3291.109: 91.8357% ( 104) 00:32:03.838 3291.109 - 3305.418: 92.0324% ( 104) 00:32:03.838 3305.418 - 3319.728: 92.2197% ( 99) 00:32:03.838 3319.728 - 3334.037: 92.4126% ( 102) 00:32:03.838 3334.037 - 3348.346: 92.5866% ( 92) 00:32:03.838 3348.346 - 3362.655: 92.7663% ( 95) 00:32:03.838 3362.655 - 3376.964: 92.9385% ( 91) 00:32:03.838 3376.964 - 3391.273: 93.1068% ( 89) 00:32:03.838 3391.273 - 3405.583: 93.2714% ( 87) 00:32:03.838 3405.583 - 3419.892: 93.4379% ( 88) 00:32:03.838 3419.892 - 3434.201: 93.6006% ( 86) 00:32:03.838 3434.201 - 3448.510: 93.7538% ( 81) 00:32:03.838 3448.510 - 3462.819: 93.9146% ( 85) 00:32:03.838 3462.819 - 3477.128: 94.0678% ( 81) 00:32:03.838 3477.128 - 3491.438: 94.2191% ( 80) 00:32:03.838 3491.438 - 3505.747: 94.3515% ( 70) 00:32:03.838 3505.747 - 3520.056: 94.4972% ( 77) 00:32:03.838 3520.056 - 3534.365: 94.6372% ( 74) 00:32:03.838 3534.365 - 3548.674: 94.7734% ( 72) 00:32:03.839 3548.674 - 3562.983: 94.8926% ( 63) 00:32:03.839 3562.983 - 3577.293: 95.0155% ( 65) 00:32:03.839 3577.293 - 3591.602: 95.1347% ( 63) 00:32:03.839 3591.602 - 3605.911: 95.2425% ( 57) 00:32:03.839 3605.911 - 3620.220: 95.3560% ( 60) 00:32:03.839 3620.220 - 3634.529: 95.4563% ( 53) 00:32:03.839 3634.529 - 3648.838: 95.5622% ( 56) 00:32:03.839 3648.838 - 3663.148: 95.6719% ( 58) 00:32:03.839 3663.148 - 3691.766: 95.8743% ( 107) 00:32:03.839 3691.766 - 3720.384: 96.0786% ( 108) 00:32:03.839 3720.384 - 3749.003: 96.2735% ( 103) 00:32:03.839 3749.003 - 3777.621: 96.4588% ( 98) 00:32:03.839 3777.621 - 3806.239: 96.6404% ( 96) 00:32:03.839 3806.239 - 3834.858: 96.8145% ( 92) 00:32:03.839 3834.858 - 3863.476: 96.9771% ( 86) 00:32:03.839 3863.476 - 3892.094: 97.1436% ( 88) 00:32:03.839 3892.094 - 3920.713: 97.2968% ( 81) 00:32:03.839 3920.713 - 3949.331: 97.4482% ( 80) 00:32:03.839 3949.331 - 3977.949: 97.5938% ( 77) 00:32:03.839 3977.949 - 4006.568: 97.7016% ( 57) 00:32:03.839 4006.568 - 4035.186: 97.8000% ( 52) 00:32:03.839 4035.186 - 4063.804: 97.8965% ( 51) 00:32:03.839 4063.804 - 4092.423: 97.9854% ( 47) 00:32:03.839 4092.423 - 4121.041: 98.0667% ( 43) 00:32:03.839 4121.041 - 4149.659: 98.1462% ( 42) 00:32:03.839 4149.659 - 4178.278: 98.2162% ( 37) 00:32:03.839 4178.278 - 4206.896: 98.2654% ( 26) 00:32:03.839 4206.896 - 4235.514: 98.3183% ( 28) 00:32:03.839 4235.514 - 4264.133: 98.3637% ( 24) 00:32:03.839 4264.133 - 4292.751: 98.4167% ( 28) 00:32:03.839 4292.751 - 4321.369: 98.4545% ( 20) 00:32:03.839 4321.369 - 4349.988: 98.4961% ( 22) 00:32:03.839 4349.988 - 4378.606: 98.5378% ( 22) 00:32:03.839 4378.606 - 4407.224: 98.5756% ( 20) 
00:32:03.839 4407.224 - 4435.843: 98.6172% ( 22) 00:32:03.839 4435.843 - 4464.461: 98.6550% ( 20) 00:32:03.839 4464.461 - 4493.079: 98.6891% ( 18) 00:32:03.839 4493.079 - 4521.698: 98.7250% ( 19) 00:32:03.839 4521.698 - 4550.316: 98.7629% ( 20) 00:32:03.839 4550.316 - 4578.934: 98.7969% ( 18) 00:32:03.839 4578.934 - 4607.553: 98.8329% ( 19) 00:32:03.839 4607.553 - 4636.171: 98.8726% ( 21) 00:32:03.839 4636.171 - 4664.790: 98.9028% ( 16) 00:32:03.839 4664.790 - 4693.408: 98.9426% ( 21) 00:32:03.839 4693.408 - 4722.026: 98.9804% ( 20) 00:32:03.839 4722.026 - 4750.645: 99.0220% ( 22) 00:32:03.839 4750.645 - 4779.263: 99.0485% ( 14) 00:32:03.839 4779.263 - 4807.881: 99.0769% ( 15) 00:32:03.839 4807.881 - 4836.500: 99.1071% ( 16) 00:32:03.839 4836.500 - 4865.118: 99.1355% ( 15) 00:32:03.839 4865.118 - 4893.736: 99.1658% ( 16) 00:32:03.839 4893.736 - 4922.355: 99.1904% ( 13) 00:32:03.839 4922.355 - 4950.973: 99.2206% ( 16) 00:32:03.839 4950.973 - 4979.591: 99.2509% ( 16) 00:32:03.839 4979.591 - 5008.210: 99.2736% ( 12) 00:32:03.839 5008.210 - 5036.828: 99.2982% ( 13) 00:32:03.839 5036.828 - 5065.446: 99.3228% ( 13) 00:32:03.839 5065.446 - 5094.065: 99.3493% ( 14) 00:32:03.839 5094.065 - 5122.683: 99.3701% ( 11) 00:32:03.839 5122.683 - 5151.301: 99.3909% ( 11) 00:32:03.839 5151.301 - 5179.920: 99.4098% ( 10) 00:32:03.839 5179.920 - 5208.538: 99.4230% ( 7) 00:32:03.839 5208.538 - 5237.156: 99.4382% ( 8) 00:32:03.839 5237.156 - 5265.775: 99.4533% ( 8) 00:32:03.839 5265.775 - 5294.393: 99.4684% ( 8) 00:32:03.839 5294.393 - 5323.011: 99.4779% ( 5) 00:32:03.839 5323.011 - 5351.630: 99.4911% ( 7) 00:32:03.839 5351.630 - 5380.248: 99.5025% ( 6) 00:32:03.839 5380.248 - 5408.866: 99.5138% ( 6) 00:32:03.839 5408.866 - 5437.485: 99.5271% ( 7) 00:32:03.839 5437.485 - 5466.103: 99.5365% ( 5) 00:32:03.839 5466.103 - 5494.721: 99.5479% ( 6) 00:32:03.839 5494.721 - 5523.340: 99.5592% ( 6) 00:32:03.839 5523.340 - 5551.958: 99.5725% ( 7) 00:32:03.839 5551.958 - 5580.576: 99.5838% ( 6) 00:32:03.839 5580.576 - 5609.195: 99.5933% ( 5) 00:32:03.839 5609.195 - 5637.813: 99.6046% ( 6) 00:32:03.839 5637.813 - 5666.431: 99.6160% ( 6) 00:32:03.839 5666.431 - 5695.050: 99.6273% ( 6) 00:32:03.839 5695.050 - 5723.668: 99.6387% ( 6) 00:32:03.839 5723.668 - 5752.286: 99.6519% ( 7) 00:32:03.839 5752.286 - 5780.905: 99.6633% ( 6) 00:32:03.839 5780.905 - 5809.523: 99.6746% ( 6) 00:32:03.839 5809.523 - 5838.141: 99.6898% ( 8) 00:32:03.839 5838.141 - 5866.760: 99.6992% ( 5) 00:32:03.839 5866.760 - 5895.378: 99.7125% ( 7) 00:32:03.839 5895.378 - 5923.997: 99.7219% ( 5) 00:32:03.839 5923.997 - 5952.615: 99.7352% ( 7) 00:32:03.839 5952.615 - 5981.233: 99.7465% ( 6) 00:32:03.839 5981.233 - 6009.852: 99.7579% ( 6) 00:32:03.839 6009.852 - 6038.470: 99.7692% ( 6) 00:32:03.839 6038.470 - 6067.088: 99.7844% ( 8) 00:32:03.839 6067.088 - 6095.707: 99.7957% ( 6) 00:32:03.839 6095.707 - 6124.325: 99.8033% ( 4) 00:32:03.839 6124.325 - 6152.943: 99.8165% ( 7) 00:32:03.839 6152.943 - 6181.562: 99.8260% ( 5) 00:32:03.839 6181.562 - 6210.180: 99.8316% ( 3) 00:32:03.839 6210.180 - 6238.798: 99.8392% ( 4) 00:32:03.839 6238.798 - 6267.417: 99.8487% ( 5) 00:32:03.839 6267.417 - 6296.035: 99.8562% ( 4) 00:32:03.839 6296.035 - 6324.653: 99.8600% ( 2) 00:32:03.839 6324.653 - 6353.272: 99.8676% ( 4) 00:32:03.839 6353.272 - 6381.890: 99.8733% ( 3) 00:32:03.839 6381.890 - 6410.508: 99.8789% ( 3) 00:32:03.839 6410.508 - 6439.127: 99.8846% ( 3) 00:32:03.839 6439.127 - 6467.745: 99.8903% ( 3) 00:32:03.839 6467.745 - 6496.363: 99.8979% ( 4) 00:32:03.839 6496.363 - 
6524.982: 99.9016% ( 2) 00:32:03.839 6524.982 - 6553.600: 99.9092% ( 4) 00:32:03.839 6553.600 - 6582.218: 99.9149% ( 3) 00:32:03.839 6582.218 - 6610.837: 99.9206% ( 3) 00:32:03.839 6610.837 - 6639.455: 99.9262% ( 3) 00:32:03.839 6639.455 - 6668.073: 99.9319% ( 3) 00:32:03.839 6668.073 - 6696.692: 99.9395% ( 4) 00:32:03.839 6696.692 - 6725.310: 99.9414% ( 1) 00:32:03.839 6725.310 - 6753.928: 99.9489% ( 4) 00:32:03.839 6753.928 - 6782.547: 99.9565% ( 4) 00:32:03.839 6782.547 - 6811.165: 99.9603% ( 2) 00:32:03.839 6811.165 - 6839.783: 99.9660% ( 3) 00:32:03.839 6839.783 - 6868.402: 99.9735% ( 4) 00:32:03.839 6868.402 - 6897.020: 99.9811% ( 4) 00:32:03.839 6897.020 - 6925.638: 99.9849% ( 2) 00:32:03.839 6925.638 - 6954.257: 99.9887% ( 2) 00:32:03.839 6954.257 - 6982.875: 99.9943% ( 3) 00:32:03.839 6982.875 - 7011.493: 99.9962% ( 1) 00:32:03.839 7011.493 - 7040.112: 99.9981% ( 1) 00:32:03.839 7125.967 - 7154.585: 100.0000% ( 1) 00:32:03.839 00:32:03.839 13:55:42 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:32:05.214 Initializing NVMe Controllers 00:32:05.214 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:05.214 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:05.214 Initialization complete. Launching workers. 00:32:05.214 ======================================================== 00:32:05.214 Latency(us) 00:32:05.214 Device Information : IOPS MiB/s Average min max 00:32:05.214 PCIE (0000:00:06.0) NSID 1 from core 0: 39169.00 459.01 3275.67 1034.04 6553.59 00:32:05.214 ======================================================== 00:32:05.214 Total : 39169.00 459.01 3275.67 1034.04 6553.59 00:32:05.214 00:32:05.214 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:05.214 ================================================================================= 00:32:05.214 1.00000% : 2303.776us 00:32:05.214 10.00000% : 2561.341us 00:32:05.214 25.00000% : 2804.597us 00:32:05.214 50.00000% : 3248.182us 00:32:05.214 75.00000% : 3691.766us 00:32:05.214 90.00000% : 4063.804us 00:32:05.214 95.00000% : 4292.751us 00:32:05.214 98.00000% : 4578.934us 00:32:05.214 99.00000% : 4807.881us 00:32:05.214 99.50000% : 4950.973us 00:32:05.214 99.90000% : 5666.431us 00:32:05.214 99.99000% : 6467.745us 00:32:05.214 99.99900% : 6553.600us 00:32:05.214 99.99990% : 6553.600us 00:32:05.214 99.99999% : 6553.600us 00:32:05.214 00:32:05.214 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:32:05.214 ============================================================================== 00:32:05.214 Range in us Cumulative IO count 00:32:05.214 1030.260 - 1037.415: 0.0026% ( 1) 00:32:05.214 1817.265 - 1824.419: 0.0051% ( 1) 00:32:05.214 1860.192 - 1874.501: 0.0077% ( 1) 00:32:05.214 1917.429 - 1931.738: 0.0102% ( 1) 00:32:05.214 1931.738 - 1946.047: 0.0128% ( 1) 00:32:05.214 1960.356 - 1974.666: 0.0179% ( 2) 00:32:05.214 1974.666 - 1988.975: 0.0230% ( 2) 00:32:05.214 1988.975 - 2003.284: 0.0332% ( 4) 00:32:05.214 2003.284 - 2017.593: 0.0383% ( 2) 00:32:05.214 2017.593 - 2031.902: 0.0434% ( 2) 00:32:05.214 2031.902 - 2046.211: 0.0511% ( 3) 00:32:05.214 2046.211 - 2060.521: 0.0664% ( 6) 00:32:05.214 2060.521 - 2074.830: 0.0868% ( 8) 00:32:05.214 2074.830 - 2089.139: 0.1047% ( 7) 00:32:05.214 2089.139 - 2103.448: 0.1123% ( 3) 00:32:05.214 2103.448 - 2117.757: 0.1251% ( 5) 00:32:05.214 2117.757 - 2132.066: 0.1455% ( 8) 00:32:05.214 2132.066 - 2146.376: 0.1787% ( 13) 00:32:05.214 2146.376 - 2160.685: 0.2042% ( 10) 
00:32:05.214 2160.685 - 2174.994: 0.2400% ( 14) 00:32:05.214 2174.994 - 2189.303: 0.2681% ( 11) 00:32:05.214 2189.303 - 2203.612: 0.3344% ( 26) 00:32:05.214 2203.612 - 2217.921: 0.4034% ( 27) 00:32:05.214 2217.921 - 2232.231: 0.4774% ( 29) 00:32:05.214 2232.231 - 2246.540: 0.5642% ( 34) 00:32:05.214 2246.540 - 2260.849: 0.6766% ( 44) 00:32:05.214 2260.849 - 2275.158: 0.8170% ( 55) 00:32:05.214 2275.158 - 2289.467: 0.9702% ( 60) 00:32:05.214 2289.467 - 2303.776: 1.1642% ( 76) 00:32:05.214 2303.776 - 2318.086: 1.3889% ( 88) 00:32:05.214 2318.086 - 2332.395: 1.6391% ( 98) 00:32:05.214 2332.395 - 2346.704: 1.9148% ( 108) 00:32:05.214 2346.704 - 2361.013: 2.2518% ( 132) 00:32:05.214 2361.013 - 2375.322: 2.5964% ( 135) 00:32:05.214 2375.322 - 2389.631: 2.9437% ( 136) 00:32:05.214 2389.631 - 2403.941: 3.3675% ( 166) 00:32:05.214 2403.941 - 2418.250: 3.8321% ( 182) 00:32:05.214 2418.250 - 2432.559: 4.3172% ( 190) 00:32:05.214 2432.559 - 2446.868: 4.8150% ( 195) 00:32:05.214 2446.868 - 2461.177: 5.3512% ( 210) 00:32:05.214 2461.177 - 2475.486: 5.9282% ( 226) 00:32:05.214 2475.486 - 2489.796: 6.5639% ( 249) 00:32:05.214 2489.796 - 2504.105: 7.2532% ( 270) 00:32:05.214 2504.105 - 2518.414: 7.9476% ( 272) 00:32:05.214 2518.414 - 2532.723: 8.7033% ( 296) 00:32:05.214 2532.723 - 2547.032: 9.4948% ( 310) 00:32:05.214 2547.032 - 2561.341: 10.2964% ( 314) 00:32:05.214 2561.341 - 2575.651: 11.1185% ( 322) 00:32:05.214 2575.651 - 2589.960: 11.8870% ( 301) 00:32:05.214 2589.960 - 2604.269: 12.7371% ( 333) 00:32:05.214 2604.269 - 2618.578: 13.6205% ( 346) 00:32:05.214 2618.578 - 2632.887: 14.5013% ( 345) 00:32:05.214 2632.887 - 2647.197: 15.3795% ( 344) 00:32:05.214 2647.197 - 2661.506: 16.3088% ( 364) 00:32:05.214 2661.506 - 2675.815: 17.2100% ( 353) 00:32:05.214 2675.815 - 2690.124: 18.1266% ( 359) 00:32:05.214 2690.124 - 2704.433: 19.0074% ( 345) 00:32:05.214 2704.433 - 2718.742: 19.9546% ( 371) 00:32:05.214 2718.742 - 2733.052: 20.9451% ( 388) 00:32:05.214 2733.052 - 2747.361: 21.8361% ( 349) 00:32:05.214 2747.361 - 2761.670: 22.7808% ( 370) 00:32:05.214 2761.670 - 2775.979: 23.6897% ( 356) 00:32:05.214 2775.979 - 2790.288: 24.5883% ( 352) 00:32:05.214 2790.288 - 2804.597: 25.4844% ( 351) 00:32:05.214 2804.597 - 2818.907: 26.4010% ( 359) 00:32:05.214 2818.907 - 2833.216: 27.2588% ( 336) 00:32:05.214 2833.216 - 2847.525: 28.1498% ( 349) 00:32:05.214 2847.525 - 2861.834: 28.9923% ( 330) 00:32:05.214 2861.834 - 2876.143: 29.8859% ( 350) 00:32:05.214 2876.143 - 2890.452: 30.7156% ( 325) 00:32:05.214 2890.452 - 2904.762: 31.5632% ( 332) 00:32:05.215 2904.762 - 2919.071: 32.3776% ( 319) 00:32:05.215 2919.071 - 2933.380: 33.2048% ( 324) 00:32:05.215 2933.380 - 2947.689: 34.0295% ( 323) 00:32:05.215 2947.689 - 2961.998: 34.8286% ( 313) 00:32:05.215 2961.998 - 2976.307: 35.6557% ( 324) 00:32:05.215 2976.307 - 2990.617: 36.4600% ( 315) 00:32:05.215 2990.617 - 3004.926: 37.2642% ( 315) 00:32:05.215 3004.926 - 3019.235: 38.0658% ( 314) 00:32:05.215 3019.235 - 3033.544: 38.8292% ( 299) 00:32:05.215 3033.544 - 3047.853: 39.5976% ( 301) 00:32:05.215 3047.853 - 3062.162: 40.3840% ( 308) 00:32:05.215 3062.162 - 3076.472: 41.1371% ( 295) 00:32:05.215 3076.472 - 3090.781: 41.8852% ( 293) 00:32:05.215 3090.781 - 3105.090: 42.6562% ( 302) 00:32:05.215 3105.090 - 3119.399: 43.3940% ( 289) 00:32:05.215 3119.399 - 3133.708: 44.1931% ( 313) 00:32:05.215 3133.708 - 3148.017: 44.9131% ( 282) 00:32:05.215 3148.017 - 3162.327: 45.6994% ( 308) 00:32:05.215 3162.327 - 3176.636: 46.5291% ( 325) 00:32:05.215 3176.636 - 3190.945: 47.2951% 
( 300) 00:32:05.215 3190.945 - 3205.254: 48.0967% ( 314) 00:32:05.215 3205.254 - 3219.563: 48.9188% ( 322) 00:32:05.215 3219.563 - 3233.872: 49.6898% ( 302) 00:32:05.215 3233.872 - 3248.182: 50.4583% ( 301) 00:32:05.215 3248.182 - 3262.491: 51.2267% ( 301) 00:32:05.215 3262.491 - 3276.800: 52.0233% ( 312) 00:32:05.215 3276.800 - 3291.109: 52.8224% ( 313) 00:32:05.215 3291.109 - 3305.418: 53.6521% ( 325) 00:32:05.215 3305.418 - 3319.728: 54.4614% ( 317) 00:32:05.215 3319.728 - 3334.037: 55.2733% ( 318) 00:32:05.215 3334.037 - 3348.346: 56.0801% ( 316) 00:32:05.215 3348.346 - 3362.655: 56.8741% ( 311) 00:32:05.215 3362.655 - 3376.964: 57.6910% ( 320) 00:32:05.215 3376.964 - 3391.273: 58.5284% ( 328) 00:32:05.215 3391.273 - 3405.583: 59.3556% ( 324) 00:32:05.215 3405.583 - 3419.892: 60.1777% ( 322) 00:32:05.215 3419.892 - 3434.201: 61.0202% ( 330) 00:32:05.215 3434.201 - 3448.510: 61.9214% ( 353) 00:32:05.215 3448.510 - 3462.819: 62.8073% ( 347) 00:32:05.215 3462.819 - 3477.128: 63.6115% ( 315) 00:32:05.215 3477.128 - 3491.438: 64.4719% ( 337) 00:32:05.215 3491.438 - 3505.747: 65.3221% ( 333) 00:32:05.215 3505.747 - 3520.056: 66.1493% ( 324) 00:32:05.215 3520.056 - 3534.365: 66.9611% ( 318) 00:32:05.215 3534.365 - 3548.674: 67.7653% ( 315) 00:32:05.215 3548.674 - 3562.983: 68.5900% ( 323) 00:32:05.215 3562.983 - 3577.293: 69.4529% ( 338) 00:32:05.215 3577.293 - 3591.602: 70.2316% ( 305) 00:32:05.215 3591.602 - 3605.911: 71.0511% ( 321) 00:32:05.215 3605.911 - 3620.220: 71.8961% ( 331) 00:32:05.215 3620.220 - 3634.529: 72.7233% ( 324) 00:32:05.215 3634.529 - 3648.838: 73.4867% ( 299) 00:32:05.215 3648.838 - 3663.148: 74.2832% ( 312) 00:32:05.215 3663.148 - 3691.766: 75.8202% ( 602) 00:32:05.215 3691.766 - 3720.384: 77.2728% ( 569) 00:32:05.215 3720.384 - 3749.003: 78.6974% ( 558) 00:32:05.215 3749.003 - 3777.621: 80.0914% ( 546) 00:32:05.215 3777.621 - 3806.239: 81.4496% ( 532) 00:32:05.215 3806.239 - 3834.858: 82.7261% ( 500) 00:32:05.215 3834.858 - 3863.476: 83.9797% ( 491) 00:32:05.215 3863.476 - 3892.094: 85.1541% ( 460) 00:32:05.215 3892.094 - 3920.713: 86.2595% ( 433) 00:32:05.215 3920.713 - 3949.331: 87.2884% ( 403) 00:32:05.215 3949.331 - 3977.949: 88.2662% ( 383) 00:32:05.215 3977.949 - 4006.568: 89.1623% ( 351) 00:32:05.215 4006.568 - 4035.186: 89.9895% ( 324) 00:32:05.215 4035.186 - 4063.804: 90.7350% ( 292) 00:32:05.215 4063.804 - 4092.423: 91.4192% ( 268) 00:32:05.215 4092.423 - 4121.041: 92.0498% ( 247) 00:32:05.215 4121.041 - 4149.659: 92.6319% ( 228) 00:32:05.215 4149.659 - 4178.278: 93.1757% ( 213) 00:32:05.215 4178.278 - 4206.896: 93.7068% ( 208) 00:32:05.215 4206.896 - 4235.514: 94.1867% ( 188) 00:32:05.215 4235.514 - 4264.133: 94.6335% ( 175) 00:32:05.215 4264.133 - 4292.751: 95.0905% ( 179) 00:32:05.215 4292.751 - 4321.369: 95.4786% ( 152) 00:32:05.215 4321.369 - 4349.988: 95.8488% ( 145) 00:32:05.215 4349.988 - 4378.606: 96.2087% ( 141) 00:32:05.215 4378.606 - 4407.224: 96.5508% ( 134) 00:32:05.215 4407.224 - 4435.843: 96.8623% ( 122) 00:32:05.215 4435.843 - 4464.461: 97.1406% ( 109) 00:32:05.215 4464.461 - 4493.079: 97.3882% ( 97) 00:32:05.215 4493.079 - 4521.698: 97.6257% ( 93) 00:32:05.215 4521.698 - 4550.316: 97.8401% ( 84) 00:32:05.215 4550.316 - 4578.934: 98.0342% ( 76) 00:32:05.215 4578.934 - 4607.553: 98.2001% ( 65) 00:32:05.215 4607.553 - 4636.171: 98.3533% ( 60) 00:32:05.215 4636.171 - 4664.790: 98.4886% ( 53) 00:32:05.215 4664.790 - 4693.408: 98.6265% ( 54) 00:32:05.215 4693.408 - 4722.026: 98.7567% ( 51) 00:32:05.215 4722.026 - 4750.645: 98.8792% ( 48) 
00:32:05.215 4750.645 - 4779.263: 98.9813% ( 40) 00:32:05.215 4779.263 - 4807.881: 99.0911% ( 43) 00:32:05.215 4807.881 - 4836.500: 99.1907% ( 39) 00:32:05.215 4836.500 - 4865.118: 99.2877% ( 38) 00:32:05.215 4865.118 - 4893.736: 99.3796% ( 36) 00:32:05.215 4893.736 - 4922.355: 99.4639% ( 33) 00:32:05.215 4922.355 - 4950.973: 99.5302% ( 26) 00:32:05.215 4950.973 - 4979.591: 99.5890% ( 23) 00:32:05.215 4979.591 - 5008.210: 99.6400% ( 20) 00:32:05.215 5008.210 - 5036.828: 99.6681% ( 11) 00:32:05.215 5036.828 - 5065.446: 99.6885% ( 8) 00:32:05.215 5065.446 - 5094.065: 99.7090% ( 8) 00:32:05.215 5094.065 - 5122.683: 99.7217% ( 5) 00:32:05.215 5122.683 - 5151.301: 99.7396% ( 7) 00:32:05.215 5151.301 - 5179.920: 99.7498% ( 4) 00:32:05.215 5179.920 - 5208.538: 99.7600% ( 4) 00:32:05.215 5208.538 - 5237.156: 99.7702% ( 4) 00:32:05.215 5237.156 - 5265.775: 99.7855% ( 6) 00:32:05.215 5265.775 - 5294.393: 99.7958% ( 4) 00:32:05.215 5294.393 - 5323.011: 99.8060% ( 4) 00:32:05.215 5323.011 - 5351.630: 99.8187% ( 5) 00:32:05.215 5351.630 - 5380.248: 99.8341% ( 6) 00:32:05.215 5380.248 - 5408.866: 99.8443% ( 4) 00:32:05.215 5408.866 - 5437.485: 99.8545% ( 4) 00:32:05.215 5437.485 - 5466.103: 99.8596% ( 2) 00:32:05.215 5466.103 - 5494.721: 99.8647% ( 2) 00:32:05.215 5494.721 - 5523.340: 99.8698% ( 2) 00:32:05.215 5523.340 - 5551.958: 99.8749% ( 2) 00:32:05.215 5551.958 - 5580.576: 99.8800% ( 2) 00:32:05.215 5580.576 - 5609.195: 99.8877% ( 3) 00:32:05.215 5609.195 - 5637.813: 99.8928% ( 2) 00:32:05.215 5637.813 - 5666.431: 99.9004% ( 3) 00:32:05.215 5666.431 - 5695.050: 99.9055% ( 2) 00:32:05.215 5695.050 - 5723.668: 99.9106% ( 2) 00:32:05.215 5723.668 - 5752.286: 99.9183% ( 3) 00:32:05.215 5752.286 - 5780.905: 99.9234% ( 2) 00:32:05.215 5780.905 - 5809.523: 99.9311% ( 3) 00:32:05.215 5809.523 - 5838.141: 99.9336% ( 1) 00:32:05.215 5838.141 - 5866.760: 99.9362% ( 1) 00:32:05.215 5866.760 - 5895.378: 99.9387% ( 1) 00:32:05.215 5895.378 - 5923.997: 99.9413% ( 1) 00:32:05.215 5923.997 - 5952.615: 99.9464% ( 2) 00:32:05.215 5952.615 - 5981.233: 99.9489% ( 1) 00:32:05.215 5981.233 - 6009.852: 99.9515% ( 1) 00:32:05.215 6009.852 - 6038.470: 99.9540% ( 1) 00:32:05.215 6038.470 - 6067.088: 99.9566% ( 1) 00:32:05.215 6067.088 - 6095.707: 99.9592% ( 1) 00:32:05.215 6095.707 - 6124.325: 99.9617% ( 1) 00:32:05.215 6124.325 - 6152.943: 99.9643% ( 1) 00:32:05.215 6152.943 - 6181.562: 99.9668% ( 1) 00:32:05.215 6181.562 - 6210.180: 99.9694% ( 1) 00:32:05.215 6210.180 - 6238.798: 99.9719% ( 1) 00:32:05.215 6238.798 - 6267.417: 99.9745% ( 1) 00:32:05.215 6267.417 - 6296.035: 99.9770% ( 1) 00:32:05.215 6324.653 - 6353.272: 99.9821% ( 2) 00:32:05.215 6353.272 - 6381.890: 99.9847% ( 1) 00:32:05.215 6381.890 - 6410.508: 99.9872% ( 1) 00:32:05.215 6410.508 - 6439.127: 99.9898% ( 1) 00:32:05.215 6439.127 - 6467.745: 99.9923% ( 1) 00:32:05.215 6467.745 - 6496.363: 99.9949% ( 1) 00:32:05.215 6496.363 - 6524.982: 99.9974% ( 1) 00:32:05.215 6524.982 - 6553.600: 100.0000% ( 1) 00:32:05.215 00:32:05.215 ************************************ 00:32:05.215 END TEST nvme_perf 00:32:05.215 ************************************ 00:32:05.215 13:55:44 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:32:05.215 00:32:05.215 real 0m2.657s 00:32:05.215 user 0m2.209s 00:32:05.215 sys 0m0.289s 00:32:05.215 13:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.215 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:32:05.215 13:55:44 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:05.215 13:55:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:32:05.215 13:55:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.215 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:32:05.215 ************************************ 00:32:05.215 START TEST nvme_hello_world 00:32:05.215 ************************************ 00:32:05.215 13:55:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:05.215 Initializing NVMe Controllers 00:32:05.215 Attached to 0000:00:06.0 00:32:05.215 Namespace ID: 1 size: 5GB 00:32:05.215 Initialization complete. 00:32:05.215 INFO: using host memory buffer for IO 00:32:05.215 Hello world! 00:32:05.215 ************************************ 00:32:05.215 END TEST nvme_hello_world 00:32:05.215 ************************************ 00:32:05.215 00:32:05.215 real 0m0.291s 00:32:05.215 user 0m0.105s 00:32:05.215 sys 0m0.121s 00:32:05.215 13:55:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.215 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:32:05.475 13:55:44 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:05.475 13:55:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.475 13:55:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.475 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:32:05.475 ************************************ 00:32:05.475 START TEST nvme_sgl 00:32:05.475 ************************************ 00:32:05.475 13:55:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:05.733 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:32:05.733 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:32:05.733 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:32:05.733 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:32:05.733 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:32:05.733 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:32:05.733 NVMe Readv/Writev Request test 00:32:05.733 Attached to 0000:00:06.0 00:32:05.733 0000:00:06.0: build_io_request_2 test passed 00:32:05.733 0000:00:06.0: build_io_request_4 test passed 00:32:05.733 0000:00:06.0: build_io_request_5 test passed 00:32:05.733 0000:00:06.0: build_io_request_6 test passed 00:32:05.733 0000:00:06.0: build_io_request_7 test passed 00:32:05.733 0000:00:06.0: build_io_request_10 test passed 00:32:05.733 Cleaning up... 
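Every stage in this run follows the same shape: run_test from common/autotest_common.sh prints the START/END banners, times the command (the real/user/sys lines), and the '[' N -le 1 ']' / xtrace_disable chatter around each invocation is its argument check and trace suppression. A minimal sketch of an equivalent wrapper, assuming only plain bash (the real helper does more bookkeeping around xtrace and failure reporting):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # run the test body; its exit code decides pass/fail
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. the invocation seen above:
    # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl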
00:32:05.733 ************************************ 00:32:05.733 END TEST nvme_sgl 00:32:05.733 ************************************ 00:32:05.733 00:32:05.733 real 0m0.426s 00:32:05.733 user 0m0.218s 00:32:05.733 sys 0m0.138s 00:32:05.733 13:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.733 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:05.733 13:55:45 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:05.733 13:55:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.733 13:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.733 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:05.734 ************************************ 00:32:05.734 START TEST nvme_e2edp 00:32:05.734 ************************************ 00:32:05.734 13:55:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:06.299 NVMe Write/Read with End-to-End data protection test 00:32:06.299 Attached to 0000:00:06.0 00:32:06.299 Cleaning up... 00:32:06.299 ************************************ 00:32:06.299 END TEST nvme_e2edp 00:32:06.299 ************************************ 00:32:06.299 00:32:06.299 real 0m0.305s 00:32:06.299 user 0m0.085s 00:32:06.299 sys 0m0.128s 00:32:06.299 13:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.299 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:06.299 13:55:45 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:06.299 13:55:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.299 13:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.299 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:06.299 ************************************ 00:32:06.299 START TEST nvme_reserve 00:32:06.299 ************************************ 00:32:06.299 13:55:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:06.557 ===================================================== 00:32:06.557 NVMe Controller at PCI bus 0, device 6, function 0 00:32:06.557 ===================================================== 00:32:06.557 Reservations: Not Supported 00:32:06.557 Reservation test passed 00:32:06.557 00:32:06.557 real 0m0.347s 00:32:06.557 user 0m0.120s 00:32:06.557 sys 0m0.148s 00:32:06.557 13:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.557 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:06.557 ************************************ 00:32:06.557 END TEST nvme_reserve 00:32:06.557 ************************************ 00:32:06.557 13:55:45 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:06.557 13:55:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.557 13:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.557 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:32:06.557 ************************************ 00:32:06.557 START TEST nvme_err_injection 00:32:06.557 ************************************ 00:32:06.557 13:55:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:06.816 NVMe Error Injection test 00:32:06.816 Attached to 0000:00:06.0 00:32:06.816 0000:00:06.0: get features failed as expected 00:32:06.816 0000:00:06.0: get features successfully as expected 00:32:06.816 0000:00:06.0: 
read failed as expected 00:32:06.816 0000:00:06.0: read successfully as expected 00:32:06.816 Cleaning up... 00:32:07.076 ************************************ 00:32:07.076 END TEST nvme_err_injection 00:32:07.076 ************************************ 00:32:07.076 00:32:07.076 real 0m0.356s 00:32:07.076 user 0m0.125s 00:32:07.076 sys 0m0.151s 00:32:07.076 13:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.076 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.076 13:55:46 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:07.076 13:55:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:32:07.076 13:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.076 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:32:07.076 ************************************ 00:32:07.076 START TEST nvme_overhead 00:32:07.076 ************************************ 00:32:07.076 13:55:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:08.478 Initializing NVMe Controllers 00:32:08.478 Attached to 0000:00:06.0 00:32:08.478 Initialization complete. Launching workers. 00:32:08.478 submit (in ns) avg, min, max = 12987.0, 10062.0, 53480.3 00:32:08.478 complete (in ns) avg, min, max = 7716.2, 5837.6, 61868.1 00:32:08.478 00:32:08.478 Submit histogram 00:32:08.478 ================ 00:32:08.478 Range in us Cumulative Count 00:32:08.478 10.061 - 10.117: 0.0085% ( 1) 00:32:08.478 10.117 - 10.173: 0.0255% ( 2) 00:32:08.478 10.229 - 10.285: 0.0510% ( 3) 00:32:08.478 10.341 - 10.397: 0.0850% ( 4) 00:32:08.478 10.452 - 10.508: 0.1105% ( 3) 00:32:08.478 10.508 - 10.564: 0.1444% ( 4) 00:32:08.478 10.564 - 10.620: 0.2039% ( 7) 00:32:08.478 10.620 - 10.676: 0.2804% ( 9) 00:32:08.478 10.676 - 10.732: 0.4503% ( 20) 00:32:08.478 10.732 - 10.788: 0.6118% ( 19) 00:32:08.478 10.788 - 10.844: 0.7732% ( 19) 00:32:08.478 10.844 - 10.900: 0.9941% ( 26) 00:32:08.478 10.900 - 10.955: 1.3850% ( 46) 00:32:08.478 10.955 - 11.011: 1.9798% ( 70) 00:32:08.478 11.011 - 11.067: 2.6001% ( 73) 00:32:08.478 11.067 - 11.123: 3.4752% ( 103) 00:32:08.478 11.123 - 11.179: 4.5119% ( 122) 00:32:08.478 11.179 - 11.235: 5.9138% ( 165) 00:32:08.478 11.235 - 11.291: 7.3413% ( 168) 00:32:08.478 11.291 - 11.347: 9.5420% ( 259) 00:32:08.478 11.347 - 11.403: 11.9127% ( 279) 00:32:08.478 11.403 - 11.459: 14.7676% ( 336) 00:32:08.478 11.459 - 11.514: 17.5376% ( 326) 00:32:08.478 11.514 - 11.570: 20.9618% ( 403) 00:32:08.478 11.570 - 11.626: 24.1057% ( 370) 00:32:08.478 11.626 - 11.682: 27.2920% ( 375) 00:32:08.478 11.682 - 11.738: 30.6313% ( 393) 00:32:08.478 11.738 - 11.794: 34.0896% ( 407) 00:32:08.478 11.794 - 11.850: 37.2929% ( 377) 00:32:08.478 11.850 - 11.906: 40.3433% ( 359) 00:32:08.478 11.906 - 11.962: 43.6231% ( 386) 00:32:08.478 11.962 - 12.017: 47.0473% ( 403) 00:32:08.478 12.017 - 12.073: 50.2252% ( 374) 00:32:08.478 12.073 - 12.129: 53.2756% ( 359) 00:32:08.478 12.129 - 12.185: 56.3854% ( 366) 00:32:08.478 12.185 - 12.241: 59.1979% ( 331) 00:32:08.478 12.241 - 12.297: 61.7894% ( 305) 00:32:08.478 12.297 - 12.353: 64.3045% ( 296) 00:32:08.478 12.353 - 12.409: 66.3013% ( 235) 00:32:08.478 12.409 - 12.465: 68.2301% ( 227) 00:32:08.478 12.465 - 12.521: 69.8445% ( 190) 00:32:08.478 12.521 - 12.576: 71.5779% ( 204) 00:32:08.478 12.576 - 12.632: 73.1498% ( 185) 00:32:08.478 12.632 - 12.688: 74.4158% ( 149) 00:32:08.478 12.688 - 12.744: 75.5969% ( 139) 
00:32:08.478 12.744 - 12.800: 76.7015% ( 130) 00:32:08.478 12.800 - 12.856: 77.6617% ( 113) 00:32:08.478 12.856 - 12.912: 78.6388% ( 115) 00:32:08.478 12.912 - 12.968: 79.5225% ( 104) 00:32:08.478 12.968 - 13.024: 80.4316% ( 107) 00:32:08.478 13.024 - 13.079: 81.2388% ( 95) 00:32:08.478 13.079 - 13.135: 82.0800% ( 99) 00:32:08.478 13.135 - 13.191: 82.6663% ( 69) 00:32:08.478 13.191 - 13.247: 83.3631% ( 82) 00:32:08.478 13.247 - 13.303: 83.8984% ( 63) 00:32:08.478 13.303 - 13.359: 84.3827% ( 57) 00:32:08.478 13.359 - 13.415: 84.8670% ( 57) 00:32:08.478 13.415 - 13.471: 85.2749% ( 48) 00:32:08.478 13.471 - 13.527: 85.7167% ( 52) 00:32:08.478 13.527 - 13.583: 86.0651% ( 41) 00:32:08.478 13.583 - 13.638: 86.2095% ( 17) 00:32:08.478 13.638 - 13.694: 86.4729% ( 31) 00:32:08.478 13.694 - 13.750: 86.6089% ( 16) 00:32:08.478 13.750 - 13.806: 86.7533% ( 17) 00:32:08.478 13.806 - 13.862: 86.8978% ( 17) 00:32:08.478 13.862 - 13.918: 87.0422% ( 17) 00:32:08.478 13.918 - 13.974: 87.2037% ( 19) 00:32:08.478 13.974 - 14.030: 87.3481% ( 17) 00:32:08.478 14.030 - 14.086: 87.5605% ( 25) 00:32:08.478 14.086 - 14.141: 87.7815% ( 26) 00:32:08.478 14.141 - 14.197: 87.9599% ( 21) 00:32:08.478 14.197 - 14.253: 88.2403% ( 33) 00:32:08.478 14.253 - 14.309: 88.4442% ( 24) 00:32:08.478 14.309 - 14.421: 89.2939% ( 100) 00:32:08.478 14.421 - 14.533: 90.0671% ( 91) 00:32:08.478 14.533 - 14.645: 90.7469% ( 80) 00:32:08.478 14.645 - 14.756: 91.7070% ( 113) 00:32:08.478 14.756 - 14.868: 92.1149% ( 48) 00:32:08.478 14.868 - 14.980: 92.5567% ( 52) 00:32:08.478 14.980 - 15.092: 92.8456% ( 34) 00:32:08.478 15.092 - 15.203: 93.0325% ( 22) 00:32:08.478 15.203 - 15.315: 93.2450% ( 25) 00:32:08.478 15.315 - 15.427: 93.3979% ( 18) 00:32:08.478 15.427 - 15.539: 93.4744% ( 9) 00:32:08.478 15.539 - 15.651: 93.6358% ( 19) 00:32:08.478 15.651 - 15.762: 93.6953% ( 7) 00:32:08.478 15.762 - 15.874: 93.7803% ( 10) 00:32:08.478 15.874 - 15.986: 93.8143% ( 4) 00:32:08.478 15.986 - 16.098: 93.8652% ( 6) 00:32:08.478 16.098 - 16.210: 93.8907% ( 3) 00:32:08.478 16.210 - 16.321: 93.9077% ( 2) 00:32:08.478 16.321 - 16.433: 93.9162% ( 1) 00:32:08.478 16.433 - 16.545: 93.9247% ( 1) 00:32:08.478 16.545 - 16.657: 93.9417% ( 2) 00:32:08.478 16.657 - 16.769: 93.9672% ( 3) 00:32:08.478 16.880 - 16.992: 94.0012% ( 4) 00:32:08.478 16.992 - 17.104: 94.0182% ( 2) 00:32:08.478 17.104 - 17.216: 94.0267% ( 1) 00:32:08.478 17.216 - 17.328: 94.0437% ( 2) 00:32:08.478 17.328 - 17.439: 94.0607% ( 2) 00:32:08.478 17.439 - 17.551: 94.1032% ( 5) 00:32:08.478 17.551 - 17.663: 94.1456% ( 5) 00:32:08.478 17.663 - 17.775: 94.1881% ( 5) 00:32:08.478 17.775 - 17.886: 94.2136% ( 3) 00:32:08.478 17.998 - 18.110: 94.2476% ( 4) 00:32:08.478 18.110 - 18.222: 94.2901% ( 5) 00:32:08.478 18.222 - 18.334: 94.3071% ( 2) 00:32:08.478 18.334 - 18.445: 94.3241% ( 2) 00:32:08.478 18.445 - 18.557: 94.3666% ( 5) 00:32:08.478 18.557 - 18.669: 94.4260% ( 7) 00:32:08.478 18.669 - 18.781: 94.4770% ( 6) 00:32:08.478 18.781 - 18.893: 94.5110% ( 4) 00:32:08.478 19.004 - 19.116: 94.5450% ( 4) 00:32:08.478 19.116 - 19.228: 94.5790% ( 4) 00:32:08.478 19.228 - 19.340: 94.5875% ( 1) 00:32:08.478 19.340 - 19.452: 94.6385% ( 6) 00:32:08.478 19.452 - 19.563: 94.6555% ( 2) 00:32:08.478 19.563 - 19.675: 94.6724% ( 2) 00:32:08.478 19.675 - 19.787: 94.6809% ( 1) 00:32:08.478 19.899 - 20.010: 94.6979% ( 2) 00:32:08.478 20.010 - 20.122: 94.7234% ( 3) 00:32:08.478 20.122 - 20.234: 94.7319% ( 1) 00:32:08.478 20.234 - 20.346: 94.7404% ( 1) 00:32:08.478 20.346 - 20.458: 94.7659% ( 3) 00:32:08.478 20.458 - 20.569: 
94.7914% ( 3) 00:32:08.478 20.569 - 20.681: 94.8424% ( 6) 00:32:08.478 20.681 - 20.793: 94.8509% ( 1) 00:32:08.478 20.793 - 20.905: 94.8679% ( 2) 00:32:08.478 20.905 - 21.017: 94.9104% ( 5) 00:32:08.478 21.017 - 21.128: 94.9274% ( 2) 00:32:08.478 21.128 - 21.240: 94.9953% ( 8) 00:32:08.478 21.240 - 21.352: 95.0718% ( 9) 00:32:08.479 21.352 - 21.464: 95.0888% ( 2) 00:32:08.479 21.464 - 21.576: 95.1313% ( 5) 00:32:08.479 21.576 - 21.687: 95.1993% ( 8) 00:32:08.479 21.687 - 21.799: 95.2502% ( 6) 00:32:08.479 21.799 - 21.911: 95.3182% ( 8) 00:32:08.479 21.911 - 22.023: 95.3692% ( 6) 00:32:08.479 22.023 - 22.134: 95.4457% ( 9) 00:32:08.479 22.134 - 22.246: 95.5561% ( 13) 00:32:08.479 22.246 - 22.358: 95.6581% ( 12) 00:32:08.479 22.358 - 22.470: 95.8025% ( 17) 00:32:08.479 22.470 - 22.582: 95.8620% ( 7) 00:32:08.479 22.582 - 22.693: 95.9300% ( 8) 00:32:08.479 22.693 - 22.805: 95.9895% ( 7) 00:32:08.479 22.805 - 22.917: 96.0235% ( 4) 00:32:08.479 22.917 - 23.029: 96.0829% ( 7) 00:32:08.479 23.029 - 23.141: 96.1594% ( 9) 00:32:08.479 23.141 - 23.252: 96.2104% ( 6) 00:32:08.479 23.252 - 23.364: 96.3038% ( 11) 00:32:08.479 23.364 - 23.476: 96.3463% ( 5) 00:32:08.479 23.476 - 23.588: 96.3888% ( 5) 00:32:08.479 23.588 - 23.700: 96.4398% ( 6) 00:32:08.479 23.700 - 23.811: 96.4993% ( 7) 00:32:08.479 23.811 - 23.923: 96.5588% ( 7) 00:32:08.479 23.923 - 24.035: 96.6012% ( 5) 00:32:08.479 24.035 - 24.147: 96.6182% ( 2) 00:32:08.479 24.147 - 24.259: 96.6692% ( 6) 00:32:08.479 24.259 - 24.370: 96.6862% ( 2) 00:32:08.479 24.370 - 24.482: 96.7117% ( 3) 00:32:08.479 24.482 - 24.594: 96.7712% ( 7) 00:32:08.479 24.594 - 24.706: 96.7967% ( 3) 00:32:08.479 24.706 - 24.817: 96.8392% ( 5) 00:32:08.479 24.817 - 24.929: 96.9156% ( 9) 00:32:08.479 24.929 - 25.041: 96.9496% ( 4) 00:32:08.479 25.041 - 25.153: 97.0431% ( 11) 00:32:08.479 25.153 - 25.265: 97.0686% ( 3) 00:32:08.479 25.265 - 25.376: 97.1280% ( 7) 00:32:08.479 25.376 - 25.488: 97.2045% ( 9) 00:32:08.479 25.488 - 25.600: 97.2725% ( 8) 00:32:08.479 25.600 - 25.712: 97.3405% ( 8) 00:32:08.479 25.712 - 25.824: 97.3745% ( 4) 00:32:08.479 25.824 - 25.935: 97.4509% ( 9) 00:32:08.479 25.935 - 26.047: 97.5274% ( 9) 00:32:08.479 26.047 - 26.159: 97.5699% ( 5) 00:32:08.479 26.159 - 26.271: 97.6549% ( 10) 00:32:08.479 26.271 - 26.383: 97.7058% ( 6) 00:32:08.479 26.383 - 26.494: 97.7823% ( 9) 00:32:08.479 26.494 - 26.606: 97.8843% ( 12) 00:32:08.479 26.606 - 26.718: 97.9522% ( 8) 00:32:08.479 26.718 - 26.830: 98.0967% ( 17) 00:32:08.479 26.830 - 26.941: 98.1647% ( 8) 00:32:08.479 26.941 - 27.053: 98.2326% ( 8) 00:32:08.479 27.053 - 27.165: 98.3261% ( 11) 00:32:08.479 27.165 - 27.277: 98.3856% ( 7) 00:32:08.479 27.277 - 27.389: 98.4536% ( 8) 00:32:08.479 27.389 - 27.500: 98.4960% ( 5) 00:32:08.479 27.500 - 27.612: 98.5555% ( 7) 00:32:08.479 27.612 - 27.724: 98.6150% ( 7) 00:32:08.479 27.724 - 27.836: 98.6575% ( 5) 00:32:08.479 27.836 - 27.948: 98.6830% ( 3) 00:32:08.479 27.948 - 28.059: 98.7679% ( 10) 00:32:08.479 28.059 - 28.171: 98.8614% ( 11) 00:32:08.479 28.171 - 28.283: 98.9379% ( 9) 00:32:08.479 28.283 - 28.395: 98.9804% ( 5) 00:32:08.479 28.395 - 28.507: 99.0399% ( 7) 00:32:08.479 28.507 - 28.618: 99.0908% ( 6) 00:32:08.479 28.618 - 28.842: 99.1503% ( 7) 00:32:08.479 28.842 - 29.066: 99.2863% ( 16) 00:32:08.479 29.066 - 29.289: 99.3287% ( 5) 00:32:08.479 29.289 - 29.513: 99.3882% ( 7) 00:32:08.479 29.513 - 29.736: 99.4817% ( 11) 00:32:08.479 29.736 - 29.960: 99.5412% ( 7) 00:32:08.479 29.960 - 30.183: 99.5752% ( 4) 00:32:08.479 30.183 - 30.407: 99.5921% ( 2) 
00:32:08.479 30.407 - 30.631: 99.6261% ( 4) 00:32:08.479 30.631 - 30.854: 99.6771% ( 6) 00:32:08.479 30.854 - 31.078: 99.6941% ( 2) 00:32:08.479 31.078 - 31.301: 99.7366% ( 5) 00:32:08.479 31.301 - 31.525: 99.7536% ( 2) 00:32:08.479 31.748 - 31.972: 99.7791% ( 3) 00:32:08.479 31.972 - 32.196: 99.7961% ( 2) 00:32:08.479 32.196 - 32.419: 99.8471% ( 6) 00:32:08.479 32.419 - 32.643: 99.8556% ( 1) 00:32:08.479 32.643 - 32.866: 99.8640% ( 1) 00:32:08.479 33.090 - 33.314: 99.8725% ( 1) 00:32:08.479 33.537 - 33.761: 99.8895% ( 2) 00:32:08.479 34.208 - 34.431: 99.9065% ( 2) 00:32:08.479 34.655 - 34.879: 99.9150% ( 1) 00:32:08.479 35.102 - 35.326: 99.9235% ( 1) 00:32:08.479 35.326 - 35.549: 99.9320% ( 1) 00:32:08.479 35.549 - 35.773: 99.9405% ( 1) 00:32:08.479 35.773 - 35.997: 99.9490% ( 1) 00:32:08.479 36.667 - 36.891: 99.9575% ( 1) 00:32:08.479 36.891 - 37.114: 99.9660% ( 1) 00:32:08.479 38.232 - 38.456: 99.9745% ( 1) 00:32:08.479 40.468 - 40.692: 99.9830% ( 1) 00:32:08.479 44.045 - 44.269: 99.9915% ( 1) 00:32:08.479 53.436 - 53.659: 100.0000% ( 1) 00:32:08.479 00:32:08.479 Complete histogram 00:32:08.479 ================== 00:32:08.479 Range in us Cumulative Count 00:32:08.479 5.813 - 5.841: 0.0085% ( 1) 00:32:08.479 5.841 - 5.869: 0.0255% ( 2) 00:32:08.479 5.897 - 5.925: 0.0425% ( 2) 00:32:08.479 5.925 - 5.953: 0.0595% ( 2) 00:32:08.479 5.953 - 5.981: 0.0850% ( 3) 00:32:08.479 6.037 - 6.065: 0.0935% ( 1) 00:32:08.479 6.065 - 6.093: 0.1105% ( 2) 00:32:08.479 6.093 - 6.121: 0.1360% ( 3) 00:32:08.479 6.121 - 6.148: 0.1529% ( 2) 00:32:08.479 6.148 - 6.176: 0.1954% ( 5) 00:32:08.479 6.176 - 6.204: 0.3059% ( 13) 00:32:08.479 6.204 - 6.232: 0.6288% ( 38) 00:32:08.479 6.232 - 6.260: 1.3170% ( 81) 00:32:08.479 6.260 - 6.288: 2.2432% ( 109) 00:32:08.479 6.288 - 6.316: 3.6112% ( 161) 00:32:08.479 6.316 - 6.344: 5.4210% ( 213) 00:32:08.479 6.344 - 6.372: 7.4178% ( 235) 00:32:08.479 6.372 - 6.400: 9.6100% ( 258) 00:32:08.479 6.400 - 6.428: 11.5303% ( 226) 00:32:08.479 6.428 - 6.456: 13.4336% ( 224) 00:32:08.479 6.456 - 6.484: 15.1925% ( 207) 00:32:08.479 6.484 - 6.512: 16.9173% ( 203) 00:32:08.479 6.512 - 6.540: 18.7866% ( 220) 00:32:08.479 6.540 - 6.568: 20.5285% ( 205) 00:32:08.479 6.568 - 6.596: 22.2619% ( 204) 00:32:08.479 6.596 - 6.624: 24.0207% ( 207) 00:32:08.479 6.624 - 6.652: 25.5077% ( 175) 00:32:08.479 6.652 - 6.679: 26.8417% ( 157) 00:32:08.479 6.679 - 6.707: 27.8528% ( 119) 00:32:08.479 6.707 - 6.735: 28.9404% ( 128) 00:32:08.479 6.735 - 6.763: 29.8666% ( 109) 00:32:08.479 6.763 - 6.791: 30.9032% ( 122) 00:32:08.479 6.791 - 6.819: 31.7869% ( 104) 00:32:08.479 6.819 - 6.847: 32.6536% ( 102) 00:32:08.479 6.847 - 6.875: 33.6987% ( 123) 00:32:08.479 6.875 - 6.903: 34.7778% ( 127) 00:32:08.479 6.903 - 6.931: 36.0863% ( 154) 00:32:08.479 6.931 - 6.959: 37.7517% ( 196) 00:32:08.479 6.959 - 6.987: 39.4086% ( 195) 00:32:08.479 6.987 - 7.015: 41.2609% ( 218) 00:32:08.479 7.015 - 7.043: 43.3597% ( 247) 00:32:08.479 7.043 - 7.071: 45.4329% ( 244) 00:32:08.479 7.071 - 7.099: 47.8715% ( 287) 00:32:08.479 7.099 - 7.127: 51.0749% ( 377) 00:32:08.479 7.127 - 7.155: 54.3547% ( 386) 00:32:08.479 7.155 - 7.210: 60.5149% ( 725) 00:32:08.479 7.210 - 7.266: 65.5111% ( 588) 00:32:08.479 7.266 - 7.322: 68.4680% ( 348) 00:32:08.479 7.322 - 7.378: 70.5073% ( 240) 00:32:08.479 7.378 - 7.434: 72.2151% ( 201) 00:32:08.479 7.434 - 7.490: 73.6511% ( 169) 00:32:08.479 7.490 - 7.546: 75.0191% ( 161) 00:32:08.479 7.546 - 7.602: 75.9708% ( 112) 00:32:08.479 7.602 - 7.658: 76.8205% ( 100) 00:32:08.479 7.658 - 7.714: 77.5852% ( 90) 
00:32:08.479 7.714 - 7.769: 78.3499% ( 90) 00:32:08.479 7.769 - 7.825: 79.3016% ( 112) 00:32:08.479 7.825 - 7.881: 80.5421% ( 146) 00:32:08.479 7.881 - 7.937: 81.4768% ( 110) 00:32:08.479 7.937 - 7.993: 82.5729% ( 129) 00:32:08.479 7.993 - 8.049: 83.4565% ( 104) 00:32:08.479 8.049 - 8.105: 84.0428% ( 69) 00:32:08.479 8.105 - 8.161: 84.7056% ( 78) 00:32:08.479 8.161 - 8.217: 85.3853% ( 80) 00:32:08.479 8.217 - 8.272: 85.8527% ( 55) 00:32:08.479 8.272 - 8.328: 86.2095% ( 42) 00:32:08.479 8.328 - 8.384: 86.4899% ( 33) 00:32:08.479 8.384 - 8.440: 86.7533% ( 31) 00:32:08.479 8.440 - 8.496: 86.8978% ( 17) 00:32:08.479 8.496 - 8.552: 87.1017% ( 24) 00:32:08.479 8.552 - 8.608: 87.2292% ( 15) 00:32:08.479 8.608 - 8.664: 87.4076% ( 21) 00:32:08.479 8.664 - 8.720: 87.4841% ( 9) 00:32:08.479 8.720 - 8.776: 87.5775% ( 11) 00:32:08.479 8.776 - 8.831: 87.6370% ( 7) 00:32:08.479 8.831 - 8.887: 87.7050% ( 8) 00:32:08.479 8.887 - 8.943: 87.7645% ( 7) 00:32:08.479 8.943 - 8.999: 87.7985% ( 4) 00:32:08.479 8.999 - 9.055: 87.8834% ( 10) 00:32:08.479 9.055 - 9.111: 88.1553% ( 32) 00:32:08.479 9.111 - 9.167: 88.9795% ( 97) 00:32:08.479 9.167 - 9.223: 90.3730% ( 164) 00:32:08.479 9.223 - 9.279: 91.2652% ( 105) 00:32:08.479 9.279 - 9.334: 92.0894% ( 97) 00:32:08.479 9.334 - 9.390: 92.6332% ( 64) 00:32:08.479 9.390 - 9.446: 92.9816% ( 41) 00:32:08.479 9.446 - 9.502: 93.2025% ( 26) 00:32:08.479 9.502 - 9.558: 93.3299% ( 15) 00:32:08.479 9.558 - 9.614: 93.3979% ( 8) 00:32:08.479 9.614 - 9.670: 93.4319% ( 4) 00:32:08.479 9.670 - 9.726: 93.4659% ( 4) 00:32:08.479 9.726 - 9.782: 93.5169% ( 6) 00:32:08.479 9.782 - 9.838: 93.5594% ( 5) 00:32:08.479 9.838 - 9.893: 93.5848% ( 3) 00:32:08.479 9.893 - 9.949: 93.6188% ( 4) 00:32:08.479 9.949 - 10.005: 93.7123% ( 11) 00:32:08.479 10.005 - 10.061: 93.8397% ( 15) 00:32:08.479 10.061 - 10.117: 93.8907% ( 6) 00:32:08.479 10.117 - 10.173: 93.9587% ( 8) 00:32:08.479 10.173 - 10.229: 94.0267% ( 8) 00:32:08.479 10.229 - 10.285: 94.1032% ( 9) 00:32:08.479 10.285 - 10.341: 94.1286% ( 3) 00:32:08.479 10.341 - 10.397: 94.1711% ( 5) 00:32:08.479 10.397 - 10.452: 94.2051% ( 4) 00:32:08.479 10.452 - 10.508: 94.2306% ( 3) 00:32:08.479 10.508 - 10.564: 94.2561% ( 3) 00:32:08.479 10.564 - 10.620: 94.2731% ( 2) 00:32:08.479 10.620 - 10.676: 94.2901% ( 2) 00:32:08.479 10.788 - 10.844: 94.2986% ( 1) 00:32:08.479 10.844 - 10.900: 94.3071% ( 1) 00:32:08.479 10.900 - 10.955: 94.3156% ( 1) 00:32:08.479 10.955 - 11.011: 94.3241% ( 1) 00:32:08.479 11.067 - 11.123: 94.3326% ( 1) 00:32:08.479 11.235 - 11.291: 94.3411% ( 1) 00:32:08.479 11.291 - 11.347: 94.3496% ( 1) 00:32:08.479 11.403 - 11.459: 94.3581% ( 1) 00:32:08.479 11.570 - 11.626: 94.3666% ( 1) 00:32:08.479 11.682 - 11.738: 94.3751% ( 1) 00:32:08.479 11.738 - 11.794: 94.3836% ( 1) 00:32:08.479 11.850 - 11.906: 94.3920% ( 1) 00:32:08.479 11.906 - 11.962: 94.4005% ( 1) 00:32:08.479 11.962 - 12.017: 94.4175% ( 2) 00:32:08.479 12.017 - 12.073: 94.4260% ( 1) 00:32:08.479 12.073 - 12.129: 94.4430% ( 2) 00:32:08.479 12.185 - 12.241: 94.4515% ( 1) 00:32:08.479 12.353 - 12.409: 94.4685% ( 2) 00:32:08.479 12.409 - 12.465: 94.4770% ( 1) 00:32:08.479 12.465 - 12.521: 94.5110% ( 4) 00:32:08.479 12.521 - 12.576: 94.5195% ( 1) 00:32:08.479 12.576 - 12.632: 94.5365% ( 2) 00:32:08.479 12.632 - 12.688: 94.5450% ( 1) 00:32:08.479 12.688 - 12.744: 94.5535% ( 1) 00:32:08.479 12.744 - 12.800: 94.5620% ( 1) 00:32:08.479 12.800 - 12.856: 94.5705% ( 1) 00:32:08.479 12.856 - 12.912: 94.6300% ( 7) 00:32:08.479 12.912 - 12.968: 94.6639% ( 4) 00:32:08.479 12.968 - 13.024: 
94.7064% ( 5) 00:32:08.479 13.024 - 13.079: 94.7574% ( 6) 00:32:08.479 13.079 - 13.135: 94.7829% ( 3) 00:32:08.479 13.135 - 13.191: 94.8254% ( 5) 00:32:08.479 13.191 - 13.247: 94.8594% ( 4) 00:32:08.479 13.247 - 13.303: 94.9104% ( 6) 00:32:08.479 13.303 - 13.359: 94.9613% ( 6) 00:32:08.479 13.359 - 13.415: 94.9953% ( 4) 00:32:08.479 13.415 - 13.471: 95.0633% ( 8) 00:32:08.479 13.471 - 13.527: 95.1398% ( 9) 00:32:08.479 13.527 - 13.583: 95.2247% ( 10) 00:32:08.479 13.583 - 13.638: 95.2672% ( 5) 00:32:08.479 13.638 - 13.694: 95.3267% ( 7) 00:32:08.479 13.694 - 13.750: 95.3692% ( 5) 00:32:08.479 13.750 - 13.806: 95.3947% ( 3) 00:32:08.479 13.806 - 13.862: 95.4627% ( 8) 00:32:08.479 13.862 - 13.918: 95.5476% ( 10) 00:32:08.479 13.918 - 13.974: 95.6156% ( 8) 00:32:08.479 13.974 - 14.030: 95.6836% ( 8) 00:32:08.479 14.030 - 14.086: 95.7516% ( 8) 00:32:08.479 14.086 - 14.141: 95.8195% ( 8) 00:32:08.479 14.141 - 14.197: 95.8705% ( 6) 00:32:08.479 14.197 - 14.253: 95.9470% ( 9) 00:32:08.479 14.253 - 14.309: 95.9725% ( 3) 00:32:08.479 14.309 - 14.421: 96.0489% ( 9) 00:32:08.479 14.421 - 14.533: 96.1169% ( 8) 00:32:08.479 14.533 - 14.645: 96.1764% ( 7) 00:32:08.479 14.645 - 14.756: 96.2444% ( 8) 00:32:08.479 14.756 - 14.868: 96.3208% ( 9) 00:32:08.479 14.868 - 14.980: 96.3973% ( 9) 00:32:08.479 14.980 - 15.092: 96.4398% ( 5) 00:32:08.479 15.092 - 15.203: 96.5503% ( 13) 00:32:08.479 15.203 - 15.315: 96.6012% ( 6) 00:32:08.479 15.315 - 15.427: 96.7372% ( 16) 00:32:08.479 15.427 - 15.539: 96.8307% ( 11) 00:32:08.479 15.539 - 15.651: 96.8901% ( 7) 00:32:08.479 15.651 - 15.762: 97.0431% ( 18) 00:32:08.479 15.762 - 15.874: 97.1111% ( 8) 00:32:08.479 15.874 - 15.986: 97.1620% ( 6) 00:32:08.479 15.986 - 16.098: 97.3065% ( 17) 00:32:08.479 16.098 - 16.210: 97.3915% ( 10) 00:32:08.479 16.210 - 16.321: 97.5359% ( 17) 00:32:08.479 16.321 - 16.433: 97.5869% ( 6) 00:32:08.479 16.433 - 16.545: 97.7483% ( 19) 00:32:08.479 16.545 - 16.657: 97.9438% ( 23) 00:32:08.479 16.657 - 16.769: 98.0202% ( 9) 00:32:08.479 16.769 - 16.880: 98.1137% ( 11) 00:32:08.480 16.880 - 16.992: 98.1817% ( 8) 00:32:08.480 16.992 - 17.104: 98.2581% ( 9) 00:32:08.480 17.104 - 17.216: 98.3516% ( 11) 00:32:08.480 17.216 - 17.328: 98.4366% ( 10) 00:32:08.480 17.328 - 17.439: 98.5300% ( 11) 00:32:08.480 17.439 - 17.551: 98.5895% ( 7) 00:32:08.480 17.551 - 17.663: 98.6745% ( 10) 00:32:08.480 17.663 - 17.775: 98.7000% ( 3) 00:32:08.480 17.775 - 17.886: 98.7764% ( 9) 00:32:08.480 17.886 - 17.998: 98.8274% ( 6) 00:32:08.480 17.998 - 18.110: 98.9039% ( 9) 00:32:08.480 18.110 - 18.222: 98.9549% ( 6) 00:32:08.480 18.222 - 18.334: 99.0144% ( 7) 00:32:08.480 18.334 - 18.445: 99.0568% ( 5) 00:32:08.480 18.445 - 18.557: 99.0908% ( 4) 00:32:08.480 18.557 - 18.669: 99.1418% ( 6) 00:32:08.480 18.669 - 18.781: 99.1758% ( 4) 00:32:08.480 18.781 - 18.893: 99.2183% ( 5) 00:32:08.480 18.893 - 19.004: 99.2353% ( 2) 00:32:08.480 19.004 - 19.116: 99.2863% ( 6) 00:32:08.480 19.116 - 19.228: 99.3202% ( 4) 00:32:08.480 19.228 - 19.340: 99.3287% ( 1) 00:32:08.480 19.340 - 19.452: 99.3797% ( 6) 00:32:08.480 19.452 - 19.563: 99.3967% ( 2) 00:32:08.480 19.563 - 19.675: 99.4222% ( 3) 00:32:08.480 19.675 - 19.787: 99.4392% ( 2) 00:32:08.480 19.787 - 19.899: 99.4562% ( 2) 00:32:08.480 19.899 - 20.010: 99.4817% ( 3) 00:32:08.480 20.010 - 20.122: 99.5242% ( 5) 00:32:08.480 20.234 - 20.346: 99.5497% ( 3) 00:32:08.480 20.346 - 20.458: 99.5921% ( 5) 00:32:08.480 20.458 - 20.569: 99.6346% ( 5) 00:32:08.480 20.569 - 20.681: 99.6686% ( 4) 00:32:08.480 20.681 - 20.793: 99.6771% ( 1) 
00:32:08.480 20.793 - 20.905: 99.6856% ( 1) 00:32:08.480 20.905 - 21.017: 99.6941% ( 1) 00:32:08.480 21.017 - 21.128: 99.7196% ( 3) 00:32:08.480 21.128 - 21.240: 99.7536% ( 4) 00:32:08.480 21.240 - 21.352: 99.7706% ( 2) 00:32:08.480 21.352 - 21.464: 99.7791% ( 1) 00:32:08.480 21.464 - 21.576: 99.8046% ( 3) 00:32:08.480 21.576 - 21.687: 99.8216% ( 2) 00:32:08.480 21.687 - 21.799: 99.8471% ( 3) 00:32:08.480 21.799 - 21.911: 99.8556% ( 1) 00:32:08.480 22.134 - 22.246: 99.8725% ( 2) 00:32:08.480 22.246 - 22.358: 99.8895% ( 2) 00:32:08.480 22.582 - 22.693: 99.8980% ( 1) 00:32:08.480 22.805 - 22.917: 99.9065% ( 1) 00:32:08.480 23.252 - 23.364: 99.9150% ( 1) 00:32:08.480 23.700 - 23.811: 99.9235% ( 1) 00:32:08.480 24.706 - 24.817: 99.9320% ( 1) 00:32:08.480 26.383 - 26.494: 99.9405% ( 1) 00:32:08.480 27.389 - 27.500: 99.9490% ( 1) 00:32:08.480 31.525 - 31.748: 99.9575% ( 1) 00:32:08.480 33.090 - 33.314: 99.9660% ( 1) 00:32:08.480 33.984 - 34.208: 99.9745% ( 1) 00:32:08.480 42.928 - 43.151: 99.9830% ( 1) 00:32:08.480 45.610 - 45.834: 99.9915% ( 1) 00:32:08.480 61.708 - 62.155: 100.0000% ( 1) 00:32:08.480 00:32:08.480 ************************************ 00:32:08.480 END TEST nvme_overhead 00:32:08.480 ************************************ 00:32:08.480 00:32:08.480 real 0m1.319s 00:32:08.480 user 0m1.115s 00:32:08.480 sys 0m0.125s 00:32:08.480 13:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:08.480 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.480 13:55:47 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:08.480 13:55:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:32:08.480 13:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.480 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:32:08.480 ************************************ 00:32:08.480 START TEST nvme_arbitration 00:32:08.480 ************************************ 00:32:08.480 13:55:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:12.699 Initializing NVMe Controllers 00:32:12.699 Attached to 0000:00:06.0 00:32:12.699 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:32:12.699 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:32:12.699 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:32:12.699 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:32:12.699 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:32:12.699 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:32:12.699 Initialization complete. Launching workers. 
00:32:12.699 Starting thread on core 1 with urgent priority queue 00:32:12.699 Starting thread on core 2 with urgent priority queue 00:32:12.699 Starting thread on core 3 with urgent priority queue 00:32:12.699 Starting thread on core 0 with urgent priority queue 00:32:12.699 QEMU NVMe Ctrl (12340 ) core 0: 1002.67 IO/s 99.73 secs/100000 ios 00:32:12.699 QEMU NVMe Ctrl (12340 ) core 1: 1045.33 IO/s 95.66 secs/100000 ios 00:32:12.699 QEMU NVMe Ctrl (12340 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:32:12.699 QEMU NVMe Ctrl (12340 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:32:12.699 ======================================================== 00:32:12.699 00:32:12.699 ************************************ 00:32:12.699 END TEST nvme_arbitration 00:32:12.699 ************************************ 00:32:12.699 00:32:12.699 real 0m3.543s 00:32:12.699 user 0m9.675s 00:32:12.699 sys 0m0.140s 00:32:12.699 13:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.699 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:32:12.699 13:55:51 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:32:12.699 13:55:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:32:12.699 13:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.699 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:32:12.699 ************************************ 00:32:12.699 START TEST nvme_single_aen 00:32:12.699 ************************************ 00:32:12.699 13:55:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:32:12.699 [2024-07-10 13:55:51.245353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:12.699 [2024-07-10 13:55:51.245524] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.699 [2024-07-10 13:55:51.479036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:12.699 Asynchronous Event Request test 00:32:12.699 Attached to 0000:00:06.0 00:32:12.699 Reset controller to setup AER completions for this process 00:32:12.699 Registering asynchronous event callbacks... 00:32:12.699 Getting orig temperature thresholds of all controllers 00:32:12.699 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:12.699 Setting all controllers temperature threshold low to trigger AER 00:32:12.699 Waiting for all controllers temperature threshold to be set lower 00:32:12.699 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:12.699 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:32:12.699 Waiting for all controllers to trigger AER and reset threshold 00:32:12.699 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:12.699 Cleaning up... 
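What the single-AEN pass above actually exercises: the tool reads the controller's temperature threshold (343 Kelvin per the log), programs it below the current composite temperature (323 Kelvin), and the controller answers with an Asynchronous Event pointing at log page 2 (SMART / health information); the callback then restores the threshold. The same handshake can be reproduced against a kernel-attached controller with nvme-cli; /dev/nvme0 is a placeholder here, since on this node the device is bound to SPDK rather than the kernel driver:

    # Temperature Threshold is feature 0x04; values are in Kelvin.
    nvme get-feature /dev/nvme0 -f 0x04            # expect 0x157 = 343 K (70 C)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x140   # 320 K, below the 323 K reading
    # The controller now posts an AER pointing at log page 2; read the page:
    nvme get-log /dev/nvme0 -i 2 -l 512
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # restore the original threshold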
00:32:12.699 ************************************ 00:32:12.699 END TEST nvme_single_aen 00:32:12.699 ************************************ 00:32:12.699 00:32:12.699 real 0m0.333s 00:32:12.699 user 0m0.113s 00:32:12.699 sys 0m0.117s 00:32:12.699 13:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.699 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:32:12.699 13:55:51 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:32:12.699 13:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:12.699 13:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.699 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:32:12.699 ************************************ 00:32:12.699 START TEST nvme_doorbell_aers 00:32:12.699 ************************************ 00:32:12.699 13:55:51 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:32:12.699 13:55:51 -- nvme/nvme.sh@70 -- # bdfs=() 00:32:12.699 13:55:51 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:32:12.699 13:55:51 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:32:12.699 13:55:51 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:32:12.699 13:55:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:12.699 13:55:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:12.699 13:55:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:12.699 13:55:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:12.699 13:55:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:12.699 13:55:51 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:12.699 13:55:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:12.699 13:55:51 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:32:12.699 13:55:51 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:12.699 [2024-07-10 13:55:51.958494] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144695) is not found. Dropping the request. 00:32:22.674 Executing: test_write_invalid_db 00:32:22.674 Waiting for AER completion... 00:32:22.674 Failure: test_write_invalid_db 00:32:22.674 00:32:22.674 Executing: test_invalid_db_write_overflow_sq 00:32:22.674 Waiting for AER completion... 00:32:22.674 Failure: test_invalid_db_write_overflow_sq 00:32:22.674 00:32:22.674 Executing: test_invalid_db_write_overflow_cq 00:32:22.674 Waiting for AER completion... 
00:32:22.674 Failure: test_invalid_db_write_overflow_cq 00:32:22.674 00:32:22.674 ************************************ 00:32:22.674 END TEST nvme_doorbell_aers 00:32:22.674 ************************************ 00:32:22.674 00:32:22.674 real 0m10.114s 00:32:22.674 user 0m8.941s 00:32:22.674 sys 0m1.085s 00:32:22.674 13:56:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.674 13:56:01 -- common/autotest_common.sh@10 -- # set +x 00:32:22.674 13:56:01 -- nvme/nvme.sh@97 -- # uname 00:32:22.674 13:56:01 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:32:22.674 13:56:01 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:32:22.674 13:56:01 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:32:22.674 13:56:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:22.674 13:56:01 -- common/autotest_common.sh@10 -- # set +x 00:32:22.675 ************************************ 00:32:22.675 START TEST nvme_multi_aen 00:32:22.675 ************************************ 00:32:22.675 13:56:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:32:22.675 [2024-07-10 13:56:01.801420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:22.675 [2024-07-10 13:56:01.801636] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.675 [2024-07-10 13:56:01.995218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:22.675 [2024-07-10 13:56:01.995343] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144695) is not found. Dropping the request. 00:32:22.675 [2024-07-10 13:56:01.995458] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144695) is not found. Dropping the request. 00:32:22.675 [2024-07-10 13:56:01.995511] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144695) is not found. Dropping the request. 00:32:22.675 [2024-07-10 13:56:02.000511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:22.675 [2024-07-10 13:56:02.000707] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.675 Child process pid: 144911 00:32:22.933 [Child] Asynchronous Event Request test 00:32:22.933 [Child] Attached to 0000:00:06.0 00:32:22.933 [Child] Registering asynchronous event callbacks... 00:32:22.933 [Child] Getting orig temperature thresholds of all controllers 00:32:22.933 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:22.933 [Child] Waiting for all controllers to trigger AER and reset threshold 00:32:22.933 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:22.933 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:22.933 [Child] Cleaning up... 00:32:23.192 Asynchronous Event Request test 00:32:23.192 Attached to 0000:00:06.0 00:32:23.192 Reset controller to setup AER completions for this process 00:32:23.192 Registering asynchronous event callbacks... 
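Unlike the single-AEN run earlier (aer -T -i 0), the -m flag here makes the aer tool spawn a second copy of itself against the same controller: in the EAL parameter dumps that follow, both processes share --file-prefix=spdk0 but use different core masks (0x1 and 0x2), and the second process's output is tagged [Child]. With --proc-type=auto the child detects the running primary and attaches as a DPDK secondary process. A rough manual equivalent, assuming two SPDK apps may share shm group id 0 this way (an illustration of the mechanism, not the test's actual fork logic):

    aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    "$aer" -T -i 0 -L log &    # first instance becomes the DPDK primary
    sleep 2                    # give the primary time to finish EAL init
    "$aer" -T -i 0 -L log &    # same -i 0: comes up as a secondary process
    wait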
00:32:23.192 Getting orig temperature thresholds of all controllers 00:32:23.192 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:23.192 Setting all controllers temperature threshold low to trigger AER 00:32:23.192 Waiting for all controllers temperature threshold to be set lower 00:32:23.192 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:23.192 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:32:23.192 Waiting for all controllers to trigger AER and reset threshold 00:32:23.192 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:23.192 Cleaning up... 00:32:23.192 ************************************ 00:32:23.192 END TEST nvme_multi_aen 00:32:23.192 ************************************ 00:32:23.192 00:32:23.192 real 0m0.615s 00:32:23.192 user 0m0.191s 00:32:23.192 sys 0m0.250s 00:32:23.192 13:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.192 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:32:23.192 13:56:02 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:23.192 13:56:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:32:23.192 13:56:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:23.192 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:32:23.192 ************************************ 00:32:23.192 START TEST nvme_startup 00:32:23.192 ************************************ 00:32:23.192 13:56:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:23.452 Initializing NVMe Controllers 00:32:23.452 Attached to 0000:00:06.0 00:32:23.452 Initialization complete. 00:32:23.452 Time used:186175.281 (us). 00:32:23.452 ************************************ 00:32:23.452 END TEST nvme_startup 00:32:23.452 ************************************ 00:32:23.452 00:32:23.452 real 0m0.287s 00:32:23.452 user 0m0.075s 00:32:23.452 sys 0m0.145s 00:32:23.452 13:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.452 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:32:23.452 13:56:02 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:32:23.452 13:56:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:23.452 13:56:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:23.452 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:32:23.452 ************************************ 00:32:23.452 START TEST nvme_multi_secondary 00:32:23.452 ************************************ 00:32:23.452 13:56:02 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:32:23.452 13:56:02 -- nvme/nvme.sh@52 -- # pid0=144970 00:32:23.452 13:56:02 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:32:23.452 13:56:02 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:23.452 13:56:02 -- nvme/nvme.sh@54 -- # pid1=144971 00:32:23.452 13:56:02 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:32:26.850 Initializing NVMe Controllers 00:32:26.850 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:26.850 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:32:26.850 Initialization complete. Launching workers. 
00:32:26.850 ======================================================== 00:32:26.850 Latency(us) 00:32:26.850 Device Information : IOPS MiB/s Average min max 00:32:26.850 PCIE (0000:00:06.0) NSID 1 from core 2: 17851.31 69.73 894.47 141.71 17086.97 00:32:26.850 ======================================================== 00:32:26.850 Total : 17851.31 69.73 894.47 141.71 17086.97 00:32:26.850 00:32:26.850 13:56:06 -- nvme/nvme.sh@56 -- # wait 144970 00:32:27.110 Initializing NVMe Controllers 00:32:27.110 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:27.110 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:32:27.110 Initialization complete. Launching workers. 00:32:27.110 ======================================================== 00:32:27.110 Latency(us) 00:32:27.110 Device Information : IOPS MiB/s Average min max 00:32:27.110 PCIE (0000:00:06.0) NSID 1 from core 1: 41882.65 163.60 381.64 128.29 5088.88 00:32:27.110 ======================================================== 00:32:27.110 Total : 41882.65 163.60 381.64 128.29 5088.88 00:32:27.110 00:32:29.016 Initializing NVMe Controllers 00:32:29.016 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:29.016 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:29.016 Initialization complete. Launching workers. 00:32:29.016 ======================================================== 00:32:29.016 Latency(us) 00:32:29.016 Device Information : IOPS MiB/s Average min max 00:32:29.016 PCIE (0000:00:06.0) NSID 1 from core 0: 49519.00 193.43 322.78 120.27 5073.10 00:32:29.016 ======================================================== 00:32:29.016 Total : 49519.00 193.43 322.78 120.27 5073.10 00:32:29.016 00:32:29.016 13:56:08 -- nvme/nvme.sh@57 -- # wait 144971 00:32:29.016 13:56:08 -- nvme/nvme.sh@61 -- # pid0=145065 00:32:29.016 13:56:08 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:32:29.016 13:56:08 -- nvme/nvme.sh@63 -- # pid1=145066 00:32:29.016 13:56:08 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:32:29.016 13:56:08 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:33.206 Initializing NVMe Controllers 00:32:33.206 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:33.206 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:32:33.206 Initialization complete. Launching workers. 00:32:33.206 ======================================================== 00:32:33.206 Latency(us) 00:32:33.206 Device Information : IOPS MiB/s Average min max 00:32:33.206 PCIE (0000:00:06.0) NSID 1 from core 1: 41093.54 160.52 389.01 133.44 1500.60 00:32:33.206 ======================================================== 00:32:33.206 Total : 41093.54 160.52 389.01 133.44 1500.60 00:32:33.206 00:32:33.206 Initializing NVMe Controllers 00:32:33.206 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:33.206 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:32:33.206 Initialization complete. Launching workers. 
00:32:33.206 ======================================================== 00:32:33.206 Latency(us) 00:32:33.206 Device Information : IOPS MiB/s Average min max 00:32:33.206 PCIE (0000:00:06.0) NSID 1 from core 0: 40202.67 157.04 397.59 127.52 1496.09 00:32:33.206 ======================================================== 00:32:33.206 Total : 40202.67 157.04 397.59 127.52 1496.09 00:32:33.206 00:32:34.582 Initializing NVMe Controllers 00:32:34.582 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:32:34.582 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:32:34.582 Initialization complete. Launching workers. 00:32:34.582 ======================================================== 00:32:34.582 Latency(us) 00:32:34.582 Device Information : IOPS MiB/s Average min max 00:32:34.582 PCIE (0000:00:06.0) NSID 1 from core 2: 18704.50 73.06 855.16 139.70 20582.32 00:32:34.582 ======================================================== 00:32:34.582 Total : 18704.50 73.06 855.16 139.70 20582.32 00:32:34.582 00:32:34.582 ************************************ 00:32:34.582 END TEST nvme_multi_secondary 00:32:34.582 ************************************ 00:32:34.582 13:56:13 -- nvme/nvme.sh@65 -- # wait 145065 00:32:34.582 13:56:13 -- nvme/nvme.sh@66 -- # wait 145066 00:32:34.582 00:32:34.582 real 0m10.778s 00:32:34.582 user 0m18.565s 00:32:34.582 sys 0m0.863s 00:32:34.582 13:56:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.582 13:56:13 -- common/autotest_common.sh@10 -- # set +x 00:32:34.582 13:56:13 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:32:34.582 13:56:13 -- nvme/nvme.sh@102 -- # kill_stub 00:32:34.582 13:56:13 -- common/autotest_common.sh@1065 -- # [[ -e /proc/144248 ]] 00:32:34.582 13:56:13 -- common/autotest_common.sh@1066 -- # kill 144248 00:32:34.582 13:56:13 -- common/autotest_common.sh@1067 -- # wait 144248 00:32:35.149 [2024-07-10 13:56:14.334215] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144910) is not found. Dropping the request. 00:32:35.149 [2024-07-10 13:56:14.335114] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144910) is not found. Dropping the request. 00:32:35.149 [2024-07-10 13:56:14.335413] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144910) is not found. Dropping the request. 00:32:35.149 [2024-07-10 13:56:14.335562] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 144910) is not found. Dropping the request. 00:32:35.408 13:56:14 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:32:35.408 13:56:14 -- common/autotest_common.sh@1073 -- # echo 2 00:32:35.408 13:56:14 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:35.408 13:56:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:35.408 13:56:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:35.408 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:32:35.408 ************************************ 00:32:35.408 START TEST bdev_nvme_reset_stuck_adm_cmd 00:32:35.408 ************************************ 00:32:35.408 13:56:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:35.408 * Looking for test storage... 
00:32:35.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:32:35.408 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:32:35.408 13:56:14 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:35.408 13:56:14 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:35.408 13:56:14 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:35.408 13:56:14 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:35.408 13:56:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:35.408 13:56:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:35.408 13:56:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:35.408 13:56:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:35.408 13:56:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:35.667 13:56:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:35.667 13:56:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:35.667 13:56:14 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:32:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=145217 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:35.667 13:56:14 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 145217 00:32:35.667 13:56:14 -- common/autotest_common.sh@819 -- # '[' -z 145217 ']' 00:32:35.667 13:56:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.667 13:56:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:35.667 13:56:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.667 13:56:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:35.667 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:32:35.667 [2024-07-10 13:56:14.846826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:35.667 [2024-07-10 13:56:14.847047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145217 ] 00:32:35.926 [2024-07-10 13:56:15.029415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:35.926 [2024-07-10 13:56:15.242078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:35.926 [2024-07-10 13:56:15.242864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.926 [2024-07-10 13:56:15.243038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.926 [2024-07-10 13:56:15.243177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:35.926 [2024-07-10 13:56:15.243166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.304 13:56:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:37.304 13:56:16 -- common/autotest_common.sh@852 -- # return 0 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:32:37.304 13:56:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.304 13:56:16 -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 nvme0n1 00:32:37.304 13:56:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_eXxKa.txt 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:32:37.304 13:56:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.304 13:56:16 -- common/autotest_common.sh@10 -- # set +x 00:32:37.304 true 00:32:37.304 13:56:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720619776 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=145261 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:32:37.304 13:56:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:39.208 13:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.208 13:56:18 -- common/autotest_common.sh@10 -- # set +x 00:32:39.208 [2024-07-10 13:56:18.481253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:32:39.208 [2024-07-10 13:56:18.482471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:39.208 [2024-07-10 13:56:18.482592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:32:39.208 [2024-07-10 13:56:18.482648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.208 [2024-07-10 13:56:18.484420] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:39.208 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 145261 00:32:39.208 13:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 145261 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 145261 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.208 13:56:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.208 13:56:18 -- common/autotest_common.sh@10 -- # set +x 00:32:39.208 13:56:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:32:39.208 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_eXxKa.txt 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_eXxKa.txt 00:32:39.468 13:56:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 145217 00:32:39.468 13:56:18 -- common/autotest_common.sh@926 -- # '[' -z 145217 ']' 00:32:39.468 13:56:18 -- common/autotest_common.sh@930 -- # kill -0 145217 00:32:39.468 13:56:18 -- common/autotest_common.sh@931 -- # uname 00:32:39.468 
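The status decode above operates on the raw completion saved by bdev_nvme_send_cmd: 16 base64-encoded bytes whose last two carry the NVMe status word (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11). A self-contained sketch of base64_decode_bits; how the helper assembles the word is inferred from this trace (status=2 for the cpl above), not copied from the functions source:

base64_decode_bits() {
    local bin_array status
    # decode the base64 payload and dump one 0xNN token per byte into a bash array
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # bytes 14-15 form the little-endian status word of completion dword 3
    status=$((bin_array[14] | bin_array[15] << 8))
    # extract the field at bit offset $2 with mask $3
    printf '0x%x' $(((status >> $2) & $3))
}

base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1 (Invalid Opcode, as injected)
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0 (generic command status)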
13:56:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:39.468 13:56:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145217 00:32:39.468 killing process with pid 145217 00:32:39.468 13:56:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:39.468 13:56:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:39.468 13:56:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145217' 00:32:39.468 13:56:18 -- common/autotest_common.sh@945 -- # kill 145217 00:32:39.468 13:56:18 -- common/autotest_common.sh@950 -- # wait 145217 00:32:42.006 13:56:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:32:42.006 13:56:21 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:32:42.006 00:32:42.006 real 0m6.580s 00:32:42.006 user 0m23.219s 00:32:42.006 sys 0m0.652s 00:32:42.006 13:56:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:42.006 ************************************ 00:32:42.006 END TEST bdev_nvme_reset_stuck_adm_cmd 00:32:42.006 ************************************ 00:32:42.006 13:56:21 -- common/autotest_common.sh@10 -- # set +x 00:32:42.006 13:56:21 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:32:42.006 13:56:21 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:32:42.006 13:56:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:42.006 13:56:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:42.006 13:56:21 -- common/autotest_common.sh@10 -- # set +x 00:32:42.006 ************************************ 00:32:42.006 START TEST nvme_fio 00:32:42.006 ************************************ 00:32:42.006 13:56:21 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:32:42.006 13:56:21 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:32:42.006 13:56:21 -- nvme/nvme.sh@32 -- # ran_fio=false 00:32:42.006 13:56:21 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:32:42.006 13:56:21 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:32:42.006 13:56:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:42.006 13:56:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:42.006 13:56:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:42.006 13:56:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:42.006 13:56:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:42.006 13:56:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:42.006 13:56:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:42.006 13:56:21 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:32:42.006 13:56:21 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:42.006 13:56:21 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:42.006 13:56:21 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:42.265 13:56:21 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:32:42.265 13:56:21 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:42.525 13:56:21 -- nvme/nvme.sh@41 -- # bs=4096 00:32:42.525 13:56:21 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:42.525 
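fio_nvme expands into the fio_plugin launch traced next. Because this run builds with SPDK_RUN_ASAN=1, the sanitizer runtime must be preloaded ahead of the SPDK ioengine, so the helper inspects the plugin's dynamic dependencies first. A condensed sketch using the fio and plugin paths logged on this VM:

fio_bin=/usr/src/fio/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# locate the ASan runtime the plugin links against, if any
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the sanitizer (when found) before the ioengine itself
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_bin" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
# note: fio reserves ':' in filenames, so the BDF's colons are written as dots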
13:56:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:42.525 13:56:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:42.525 13:56:21 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:32:42.525 13:56:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:42.525 13:56:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:42.525 13:56:21 -- common/autotest_common.sh@1320 -- # shift 00:32:42.525 13:56:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:42.525 13:56:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:42.525 13:56:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:42.525 13:56:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:42.525 13:56:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:42.525 13:56:21 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:32:42.525 13:56:21 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:32:42.525 13:56:21 -- common/autotest_common.sh@1326 -- # break 00:32:42.525 13:56:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:42.525 13:56:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:32:42.784 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:42.784 fio-3.35 00:32:42.784 Starting 1 thread 00:32:49.411 00:32:49.411 test: (groupid=0, jobs=1): err= 0: pid=145426: Wed Jul 10 13:56:27 2024 00:32:49.411 read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec) 00:32:49.411 slat (usec): min=4, max=112, avg= 5.00, stdev= 1.18 00:32:49.411 clat (usec): min=276, max=12788, avg=2734.69, stdev=430.49 00:32:49.411 lat (usec): min=282, max=12900, avg=2739.70, stdev=431.22 00:32:49.411 clat percentiles (usec): 00:32:49.411 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:32:49.411 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:32:49.411 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2868], 95.00th=[ 2966], 00:32:49.411 | 99.00th=[ 4178], 99.50th=[ 5932], 99.90th=[ 7701], 99.95th=[ 9503], 00:32:49.411 | 99.99th=[12387] 00:32:49.411 bw ( KiB/s): min=88384, max=95808, per=99.62%, avg=92952.00, stdev=3997.14, samples=3 00:32:49.411 iops : min=22096, max=23952, avg=23238.00, stdev=999.29, samples=3 00:32:49.411 write: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(181MiB/2001msec); 0 zone resets 00:32:49.411 slat (nsec): min=4587, max=52643, avg=5211.22, stdev=1022.85 00:32:49.411 clat (usec): min=207, max=12557, avg=2742.39, stdev=438.16 00:32:49.411 lat (usec): min=213, max=12591, avg=2747.60, stdev=438.85 00:32:49.411 clat percentiles (usec): 00:32:49.411 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:32:49.411 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:32:49.411 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2999], 00:32:49.411 | 99.00th=[ 4228], 99.50th=[ 5932], 99.90th=[ 7767], 99.95th=[ 9896], 
00:32:49.411 | 99.99th=[12125] 00:32:49.411 bw ( KiB/s): min=87784, max=96384, per=100.00%, avg=93077.33, stdev=4631.43, samples=3 00:32:49.411 iops : min=21946, max=24096, avg=23269.33, stdev=1157.86, samples=3 00:32:49.411 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:49.411 lat (msec) : 2=0.42%, 4=98.33%, 10=1.16%, 20=0.05% 00:32:49.411 cpu : usr=100.15%, sys=0.00%, ctx=4, majf=0, minf=37 00:32:49.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:49.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:49.411 issued rwts: total=46678,46363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:49.411 00:32:49.411 Run status group 0 (all jobs): 00:32:49.411 READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:32:49.411 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=181MiB (190MB), run=2001-2001msec 00:32:49.411 ----------------------------------------------------- 00:32:49.411 Suppressions used: 00:32:49.411 count bytes template 00:32:49.411 1 32 /usr/src/fio/parse.c 00:32:49.411 ----------------------------------------------------- 00:32:49.411 00:32:49.411 ************************************ 00:32:49.411 END TEST nvme_fio 00:32:49.411 ************************************ 00:32:49.411 13:56:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:49.411 13:56:28 -- nvme/nvme.sh@46 -- # true 00:32:49.411 00:32:49.411 real 0m6.751s 00:32:49.411 user 0m4.395s 00:32:49.411 sys 0m4.451s 00:32:49.411 13:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.411 13:56:28 -- common/autotest_common.sh@10 -- # set +x 00:32:49.411 ************************************ 00:32:49.411 END TEST nvme 00:32:49.411 ************************************ 00:32:49.411 00:32:49.411 real 0m51.137s 00:32:49.411 user 2m11.147s 00:32:49.411 sys 0m11.835s 00:32:49.411 13:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.411 13:56:28 -- common/autotest_common.sh@10 -- # set +x 00:32:49.411 13:56:28 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:32:49.411 13:56:28 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:49.411 13:56:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:49.411 13:56:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:49.411 13:56:28 -- common/autotest_common.sh@10 -- # set +x 00:32:49.411 ************************************ 00:32:49.411 START TEST nvme_scc 00:32:49.411 ************************************ 00:32:49.411 13:56:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:49.411 * Looking for test storage... 
00:32:49.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:49.411 13:56:28 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:49.411 13:56:28 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:49.411 13:56:28 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:49.411 13:56:28 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:49.411 13:56:28 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:49.411 13:56:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.411 13:56:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.411 13:56:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.411 13:56:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:49.411 13:56:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:49.411 13:56:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:49.411 13:56:28 -- paths/export.sh@5 -- # export PATH 00:32:49.411 13:56:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:49.411 13:56:28 -- nvme/functions.sh@10 -- # ctrls=() 00:32:49.411 13:56:28 -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:49.411 13:56:28 -- nvme/functions.sh@11 -- # nvmes=() 00:32:49.411 13:56:28 -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:49.411 13:56:28 -- nvme/functions.sh@12 -- # bdfs=() 00:32:49.411 13:56:28 -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:49.411 13:56:28 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:49.411 13:56:28 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:49.411 13:56:28 -- nvme/functions.sh@14 -- # nvme_name= 00:32:49.411 13:56:28 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:49.411 13:56:28 -- nvme/nvme_scc.sh@12 -- # uname 00:32:49.412 13:56:28 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:49.412 13:56:28 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
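The register dump that follows is scan_nvme_ctrls at work: for each /sys/class/nvme/nvme* controller it runs nvme id-ctrl (then id-ns per namespace) and folds every 'field : value' line into a bash associative array by splitting on IFS=:. A simplified stand-in for that nvme_get loop, with a fixed array name in place of the helper's eval-driven dynamic one; the nvme-cli path and device node are assumptions from this trace:

declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # field names arrive right-padded, e.g. 'vid      '
    [[ -n $reg && -n $val ]] || continue
    nvme0[$reg]=${val# }            # drop the single space after the colon
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "vid=${nvme0[vid]} mn=${nvme0[mn]} oncs=${nvme0[oncs]}"

Values keep their trailing padding (e.g. sn='12340 '), matching the assignments in the dump below.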
00:32:49.412 13:56:28 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:49.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:49.412 Waiting for block devices as requested 00:32:49.412 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.673 13:56:28 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:49.673 13:56:28 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:49.673 13:56:28 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:49.673 13:56:28 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:32:49.673 13:56:28 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:32:49.673 13:56:28 -- scripts/common.sh@15 -- # local i 00:32:49.673 13:56:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:32:49.673 13:56:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:32:49.673 13:56:28 -- scripts/common.sh@24 -- # return 0 00:32:49.673 13:56:28 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:49.673 13:56:28 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:49.673 13:56:28 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@18 -- # shift 00:32:49.673 13:56:28 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 
00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.673 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:49.673 13:56:28 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:49.673 13:56:28 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.673 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- 
# read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:49.674 
13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.674 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:49.674 13:56:28 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.674 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:49.675 
13:56:28 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 
13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:49.675 13:56:28 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:49.675 13:56:28 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:49.675 13:56:28 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:49.675 13:56:28 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@18 -- # shift 00:32:49.675 13:56:28 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 
00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.675 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:49.675 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.675 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 
13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:49.676 13:56:28 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # IFS=: 00:32:49.676 13:56:28 -- nvme/functions.sh@21 -- # read -r reg val 00:32:49.676 13:56:28 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:49.677 13:56:28 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:49.677 13:56:28 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:32:49.677 13:56:28 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:32:49.677 13:56:28 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:49.677 13:56:28 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:32:49.677 13:56:28 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:49.677 13:56:28 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:32:49.677 13:56:28 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:32:49.677 13:56:28 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:32:49.677 13:56:28 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:32:49.677 13:56:28 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:32:49.677 13:56:28 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:32:49.677 13:56:28 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:32:49.677 13:56:28 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:49.677 13:56:28 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:49.677 13:56:28 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:49.677 13:56:28 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:49.677 13:56:28 -- nvme/functions.sh@76 -- # echo 0x15d 00:32:49.677 13:56:28 -- nvme/functions.sh@184 -- # oncs=0x15d 00:32:49.677 13:56:28 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:32:49.677 13:56:28 -- nvme/functions.sh@197 -- # echo nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:32:49.677 13:56:28 -- nvme/functions.sh@206 -- # echo nvme0 00:32:49.677 13:56:28 -- nvme/functions.sh@207 -- # return 0 00:32:49.677 13:56:28 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:32:49.677 13:56:28 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:32:49.677 13:56:28 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:49.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:50.195 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:32:51.133 13:56:30 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:51.133 13:56:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:32:51.134 13:56:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:51.134 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:32:51.134 ************************************ 00:32:51.134 START TEST nvme_simple_copy 00:32:51.134 ************************************ 00:32:51.134 13:56:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:32:51.454 Initializing NVMe Controllers 00:32:51.454 Attaching to 0000:00:06.0 00:32:51.454 Controller supports SCC. Attached to 0000:00:06.0 00:32:51.454 Namespace ID: 1 size: 5GB 00:32:51.454 Initialization complete. 00:32:51.454 00:32:51.454 Controller QEMU NVMe Ctrl (12340 ) 00:32:51.454 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:51.454 Namespace Block Size:4096 00:32:51.454 Writing LBAs 0 to 63 with Random Data 00:32:51.454 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:51.454 LBAs matching Written Data: 64 00:32:51.454 ************************************ 00:32:51.454 END TEST nvme_simple_copy 00:32:51.454 ************************************ 00:32:51.454 00:32:51.454 real 0m0.298s 00:32:51.454 user 0m0.129s 00:32:51.454 sys 0m0.072s 00:32:51.454 13:56:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.454 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:32:51.454 ************************************ 00:32:51.454 END TEST nvme_scc 00:32:51.454 ************************************ 00:32:51.454 00:32:51.454 real 0m2.602s 00:32:51.454 user 0m0.729s 00:32:51.454 sys 0m1.765s 00:32:51.454 13:56:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:51.454 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:32:51.454 13:56:30 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:32:51.454 13:56:30 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:32:51.454 13:56:30 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:32:51.454 13:56:30 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:32:51.454 13:56:30 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:32:51.454 13:56:30 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:51.454 13:56:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:51.454 13:56:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:51.454 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:32:51.454 ************************************ 00:32:51.454 START TEST nvme_rpc 00:32:51.454 ************************************ 00:32:51.454 13:56:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:51.712 * Looking for test storage... 
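The nvme_simple_copy run above exercises the Simple Copy Command that the earlier `oncs & 1 << 8` check confirmed (ONCS bit 8, oncs=0x15d): 64 random 4096-byte LBAs are written at LBA 0, copied to destination LBA 256, and verified ("LBAs matching Written Data: 64"). A rough stand-alone equivalent using nvme-cli is sketched below; the device path is an assumption, and the `nvme copy` subcommand and its `--sdlba`/`--slbs`/`--blocks` options require a reasonably recent nvme-cli build, so treat this as a sketch rather than what the test binary actually issues.

```bash
# Hedged re-creation of the simple_copy check: write 64 random LBAs,
# SCC-copy them to LBA 256, read the destination back and compare.
# /dev/nvme0n1 and the 4096 B formatted block size are assumptions
# taken from the in-use LBA format (lbaf4: lbads:12) in the trace above.
dev=/dev/nvme0n1
bs=4096
dd if=/dev/urandom of=/tmp/pattern bs=$bs count=64
dd if=/tmp/pattern of=$dev bs=$bs count=64 oflag=direct          # LBAs 0-63
nvme copy $dev --sdlba=256 --slbs=0 --blocks=63                  # NLB is 0-based: 63 => 64 blocks
dd if=$dev of=/tmp/copied bs=$bs skip=256 count=64 iflag=direct
cmp /tmp/pattern /tmp/copied && echo 'LBAs matching Written Data: 64'
```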
00:32:51.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:51.712 13:56:30 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.712 13:56:30 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:51.712 13:56:30 -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:51.712 13:56:30 -- common/autotest_common.sh@1509 -- # local bdfs 00:32:51.712 13:56:30 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:51.712 13:56:30 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:51.712 13:56:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:51.712 13:56:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:51.713 13:56:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:51.713 13:56:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:51.713 13:56:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:51.713 13:56:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:51.713 13:56:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:32:51.713 13:56:30 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:32:51.713 13:56:30 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:32:51.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.713 13:56:30 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=145966 00:32:51.713 13:56:30 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:51.713 13:56:30 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 145966 00:32:51.713 13:56:30 -- common/autotest_common.sh@819 -- # '[' -z 145966 ']' 00:32:51.713 13:56:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.713 13:56:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:51.713 13:56:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.713 13:56:30 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:51.713 13:56:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:51.713 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:32:51.713 [2024-07-10 13:56:31.023641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
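The `get_first_nvme_bdf` trace above resolves the controller's PCI address by piping the generated bdev config through jq. Condensed into a stand-alone snippet, with paths taken from this run:

```bash
# BDF discovery as performed by get_nvme_bdfs: gen_nvme.sh emits a JSON
# config whose bdev_nvme_attach_controller entries carry the PCI address
# in .params.traddr; the first hit becomes the test's $bdf.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
bdf=${bdfs[0]}    # 0000:00:06.0 in this run
```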
00:32:51.713 [2024-07-10 13:56:31.024258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145966 ] 00:32:51.971 [2024-07-10 13:56:31.188142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:52.230 [2024-07-10 13:56:31.392418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:52.230 [2024-07-10 13:56:31.393135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.230 [2024-07-10 13:56:31.393141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.166 13:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:53.166 13:56:32 -- common/autotest_common.sh@852 -- # return 0 00:32:53.166 13:56:32 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:32:53.426 Nvme0n1 00:32:53.426 13:56:32 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:53.426 13:56:32 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:53.685 request: 00:32:53.685 { 00:32:53.685 "filename": "non_existing_file", 00:32:53.685 "bdev_name": "Nvme0n1", 00:32:53.685 "method": "bdev_nvme_apply_firmware", 00:32:53.685 "req_id": 1 00:32:53.685 } 00:32:53.685 Got JSON-RPC error response 00:32:53.685 response: 00:32:53.685 { 00:32:53.685 "code": -32603, 00:32:53.685 "message": "open file failed." 00:32:53.685 } 00:32:53.685 13:56:32 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:53.685 13:56:32 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:53.685 13:56:32 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:53.943 13:56:33 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:53.943 13:56:33 -- nvme/nvme_rpc.sh@40 -- # killprocess 145966 00:32:53.943 13:56:33 -- common/autotest_common.sh@926 -- # '[' -z 145966 ']' 00:32:53.943 13:56:33 -- common/autotest_common.sh@930 -- # kill -0 145966 00:32:53.943 13:56:33 -- common/autotest_common.sh@931 -- # uname 00:32:53.943 13:56:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:53.943 13:56:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145966 00:32:53.943 killing process with pid 145966 00:32:53.943 13:56:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:53.943 13:56:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:53.943 13:56:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145966' 00:32:53.943 13:56:33 -- common/autotest_common.sh@945 -- # kill 145966 00:32:53.943 13:56:33 -- common/autotest_common.sh@950 -- # wait 145966 00:32:56.485 ************************************ 00:32:56.485 END TEST nvme_rpc 00:32:56.485 ************************************ 00:32:56.485 00:32:56.485 real 0m4.705s 00:32:56.485 user 0m8.907s 00:32:56.485 sys 0m0.559s 00:32:56.485 13:56:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.485 13:56:35 -- common/autotest_common.sh@10 -- # set +x 00:32:56.485 13:56:35 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:56.485 13:56:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:56.485 13:56:35 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:32:56.485 13:56:35 -- common/autotest_common.sh@10 -- # set +x 00:32:56.485 ************************************ 00:32:56.485 START TEST nvme_rpc_timeouts 00:32:56.485 ************************************ 00:32:56.485 13:56:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:56.485 * Looking for test storage... 00:32:56.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_146062 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_146062 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=146086 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:56.485 13:56:35 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 146086 00:32:56.485 13:56:35 -- common/autotest_common.sh@819 -- # '[' -z 146086 ']' 00:32:56.485 13:56:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.485 13:56:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:56.485 13:56:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.485 13:56:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:56.485 13:56:35 -- common/autotest_common.sh@10 -- # set +x 00:32:56.485 [2024-07-10 13:56:35.700165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:56.485 [2024-07-10 13:56:35.700372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146086 ] 00:32:56.743 [2024-07-10 13:56:35.850168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:56.743 [2024-07-10 13:56:36.049709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:56.743 [2024-07-10 13:56:36.050543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.743 [2024-07-10 13:56:36.050549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.118 Checking default timeout settings: 00:32:58.118 13:56:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:58.118 13:56:37 -- common/autotest_common.sh@852 -- # return 0 00:32:58.118 13:56:37 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:58.118 13:56:37 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:58.376 Making settings changes with rpc: 00:32:58.376 13:56:37 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:58.376 13:56:37 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:58.635 Check default vs. 
modified settings: 00:32:58.635 13:56:37 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:32:58.635 13:56:37 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 Setting action_on_timeout is changed as expected. 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:58.893 Setting timeout_us is changed as expected. 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_146062 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:58.893 Setting timeout_admin_us is changed as expected. 00:32:58.893 13:56:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
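The before/after comparison traced above reduces to a small loop: `save_config` is dumped once before and once after `bdev_nvme_set_options`, and each setting is extracted from both dumps with the same grep/awk/sed chain. A condensed equivalent, reusing the pid-stamped temp files from this run:

```bash
# Compare default vs. modified bdev_nvme options field by field; each
# value is pulled from the saved config and stripped to alphanumerics,
# exactly as nvme_rpc_timeouts.sh@40-41 does above.
def=/tmp/settings_default_146062
mod=/tmp/settings_modified_146062
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" "$def" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" "$mod" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  if [ "$before" != "$after" ]; then
    echo "Setting $setting is changed as expected."
  else
    echo "Setting $setting unexpectedly unchanged." >&2
  fi
done
```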
00:32:58.894 13:56:38 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:58.894 13:56:38 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_146062 /tmp/settings_modified_146062 00:32:58.894 13:56:38 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 146086 00:32:58.894 13:56:38 -- common/autotest_common.sh@926 -- # '[' -z 146086 ']' 00:32:58.894 13:56:38 -- common/autotest_common.sh@930 -- # kill -0 146086 00:32:58.894 13:56:38 -- common/autotest_common.sh@931 -- # uname 00:32:58.894 13:56:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:58.894 13:56:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146086 00:32:58.894 killing process with pid 146086 00:32:58.894 13:56:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:58.894 13:56:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:58.894 13:56:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146086' 00:32:58.894 13:56:38 -- common/autotest_common.sh@945 -- # kill 146086 00:32:58.894 13:56:38 -- common/autotest_common.sh@950 -- # wait 146086 00:33:02.172 RPC TIMEOUT SETTING TEST PASSED. 00:33:02.172 13:56:40 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:33:02.172 00:33:02.172 real 0m5.298s 00:33:02.172 user 0m10.109s 00:33:02.172 sys 0m0.673s 00:33:02.172 ************************************ 00:33:02.172 END TEST nvme_rpc_timeouts 00:33:02.172 ************************************ 00:33:02.172 13:56:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.172 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:33:02.172 13:56:40 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:33:02.172 13:56:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:33:02.172 13:56:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:02.172 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:33:02.172 13:56:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:02.172 13:56:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:02.172 13:56:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:02.172 13:56:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:02.172 13:56:40 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:33:02.172 13:56:40 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:02.172 13:56:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:02.172 13:56:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 
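The `killprocess` helper invoked after every test in this log is worth spelling out once: it refuses to signal anything that is not a live SPDK reactor and reaps the process so the next spdk_tgt can rebind the same sockets and hugepages. A condensed sketch assembled from the autotest_common.sh traces above (the real helper also branches on `uname`, elided here):

```bash
# Condensed killprocess: check liveness, confirm the target is a reactor
# (never a sudo wrapper), then kill and wait so resources are freed.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
  [ "$process_name" = sudo ] && return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}
```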
00:33:02.172 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:33:02.172 ************************************ 00:33:02.172 START TEST blockdev_raid5f 00:33:02.172 ************************************ 00:33:02.172 13:56:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:02.172 * Looking for test storage... 00:33:02.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:02.172 13:56:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:02.172 13:56:41 -- bdev/nbd_common.sh@6 -- # set -e 00:33:02.172 13:56:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:02.172 13:56:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:02.172 13:56:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:02.172 13:56:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:02.172 13:56:41 -- bdev/blockdev.sh@18 -- # : 00:33:02.172 13:56:41 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:33:02.172 13:56:41 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:33:02.172 13:56:41 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:33:02.172 13:56:41 -- bdev/blockdev.sh@672 -- # uname -s 00:33:02.172 13:56:41 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:33:02.172 13:56:41 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:33:02.172 13:56:41 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:33:02.172 13:56:41 -- bdev/blockdev.sh@681 -- # crypto_device= 00:33:02.172 13:56:41 -- bdev/blockdev.sh@682 -- # dek= 00:33:02.172 13:56:41 -- bdev/blockdev.sh@683 -- # env_ctx= 00:33:02.172 13:56:41 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:33:02.172 13:56:41 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:33:02.172 13:56:41 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:33:02.172 13:56:41 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:33:02.172 13:56:41 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:33:02.172 13:56:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146264 00:33:02.172 13:56:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:02.172 13:56:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:02.172 13:56:41 -- bdev/blockdev.sh@47 -- # waitforlisten 146264 00:33:02.172 13:56:41 -- common/autotest_common.sh@819 -- # '[' -z 146264 ']' 00:33:02.172 13:56:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.172 13:56:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:02.172 13:56:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.172 13:56:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:02.172 13:56:41 -- common/autotest_common.sh@10 -- # set +x 00:33:02.172 [2024-07-10 13:56:41.136345] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
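`setup_raid5f_conf` runs as a single `rpc_cmd` batch, so the individual RPCs are not visible in this trace. From the bdev_get_bdevs JSON printed further down (three 65536-block, 512 B malloc bases under a raid5f volume with strip_size_kb 2), a plausible hand-rolled equivalent would look like the sketch below; the RPC names exist in SPDK's rpc.py, but the exact flags used inside setup_raid5f_conf are inferred, not quoted.

```bash
# Assumed reconstruction of the raid5f test volume (sizes inferred from
# the JSON dump below: 65536 data blocks x 512 B per base bdev = 32 MiB).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2; do
  $rpc bdev_malloc_create -b Malloc$i 32 512
done
$rpc bdev_raid_create -n raid5f -z 2 -r raid5f -b 'Malloc0 Malloc1 Malloc2'
```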
00:33:02.172 [2024-07-10 13:56:41.136581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146264 ] 00:33:02.172 [2024-07-10 13:56:41.300348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.429 [2024-07-10 13:56:41.530374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:02.429 [2024-07-10 13:56:41.530674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.361 13:56:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:03.361 13:56:42 -- common/autotest_common.sh@852 -- # return 0 00:33:03.361 13:56:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:33:03.361 13:56:42 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:33:03.361 13:56:42 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:33:03.361 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.361 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 Malloc0 00:33:03.619 Malloc1 00:33:03.619 Malloc2 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:33:03.619 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.619 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@738 -- # cat 00:33:03.619 13:56:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:33:03.619 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.619 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:33:03.619 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.619 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:03.619 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.619 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:33:03.619 13:56:42 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:33:03.619 13:56:42 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:33:03.619 13:56:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:03.619 13:56:42 -- common/autotest_common.sh@10 -- # set +x 00:33:03.619 13:56:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:03.619 13:56:42 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:33:03.619 13:56:42 -- bdev/blockdev.sh@747 -- # jq -r .name 00:33:03.619 13:56:42 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c246e229-0482-4ace-9ef5-47c45d6db8e7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "78b5b9a5-de07-40f6-936a-c2cb0d2e5bf5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8593da6a-8a27-422f-83ab-c460239c3b9c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:03.875 13:56:42 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:33:03.875 13:56:42 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:33:03.875 13:56:42 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:33:03.875 13:56:42 -- bdev/blockdev.sh@752 -- # killprocess 146264 00:33:03.875 13:56:42 -- common/autotest_common.sh@926 -- # '[' -z 146264 ']' 00:33:03.875 13:56:42 -- common/autotest_common.sh@930 -- # kill -0 146264 00:33:03.875 13:56:42 -- common/autotest_common.sh@931 -- # uname 00:33:03.875 13:56:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:03.875 13:56:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146264 00:33:03.875 13:56:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:03.875 13:56:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:03.875 13:56:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146264' 00:33:03.875 killing process with pid 146264 00:33:03.875 13:56:43 -- common/autotest_common.sh@945 -- # kill 146264 00:33:03.875 13:56:43 -- common/autotest_common.sh@950 -- # wait 146264 00:33:07.160 13:56:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:07.160 13:56:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:07.160 13:56:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:33:07.160 13:56:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.160 13:56:45 -- common/autotest_common.sh@10 -- # set +x 00:33:07.160 ************************************ 00:33:07.160 START TEST bdev_hello_world 00:33:07.160 ************************************ 00:33:07.160 13:56:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:07.160 [2024-07-10 13:56:45.959987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:33:07.160 [2024-07-10 13:56:45.960200] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146349 ] 00:33:07.160 [2024-07-10 13:56:46.119277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.160 [2024-07-10 13:56:46.336542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.727 [2024-07-10 13:56:46.949518] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:07.727 [2024-07-10 13:56:46.949665] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:33:07.727 [2024-07-10 13:56:46.949707] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:07.727 [2024-07-10 13:56:46.950274] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:07.727 [2024-07-10 13:56:46.950461] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:07.727 [2024-07-10 13:56:46.950515] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:07.727 [2024-07-10 13:56:46.950601] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:07.727 00:33:07.727 [2024-07-10 13:56:46.950664] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:09.634 ************************************ 00:33:09.634 END TEST bdev_hello_world 00:33:09.634 ************************************ 00:33:09.634 00:33:09.634 real 0m2.746s 00:33:09.634 user 0m2.419s 00:33:09.634 sys 0m0.208s 00:33:09.634 13:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:09.634 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:33:09.634 13:56:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:33:09.634 13:56:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:09.634 13:56:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:09.634 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:33:09.634 ************************************ 00:33:09.634 START TEST bdev_bounds 00:33:09.634 ************************************ 00:33:09.634 13:56:48 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:33:09.634 13:56:48 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:09.634 13:56:48 -- bdev/blockdev.sh@288 -- # bdevio_pid=146419 00:33:09.634 13:56:48 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:09.634 13:56:48 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146419' 00:33:09.634 Process bdevio pid: 146419 00:33:09.634 13:56:48 -- bdev/blockdev.sh@291 -- # waitforlisten 146419 00:33:09.634 13:56:48 -- common/autotest_common.sh@819 -- # '[' -z 146419 ']' 00:33:09.634 13:56:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.634 13:56:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:09.634 13:56:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
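bdev_bounds, starting here, drives the bdevio CUnit app whose pass/fail list follows: bdevio is launched with `-w` so it comes up idle behind its RPC socket, and tests.py then triggers the suite. Roughly, with paths from this run (the readiness wait is elided and the exact teardown is an assumption):

```bash
# Drive bdevio by hand: start it waiting (-w) against the shared bdev.json,
# fire the suite via its RPC helper, then reap it (e.g. with the
# killprocess helper sketched earlier).
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" '' &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to come up, then:
"$spdk/test/bdev/bdevio/tests.py" perform_tests
killprocess $bdevio_pid
```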
00:33:09.634 13:56:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:09.634 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:33:09.634 [2024-07-10 13:56:48.754452] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:09.634 [2024-07-10 13:56:48.754656] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146419 ] 00:33:09.634 [2024-07-10 13:56:48.920754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:09.894 [2024-07-10 13:56:49.142880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.894 [2024-07-10 13:56:49.143065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.894 [2024-07-10 13:56:49.143068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.272 13:56:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:11.272 13:56:50 -- common/autotest_common.sh@852 -- # return 0 00:33:11.272 13:56:50 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:11.272 I/O targets: 00:33:11.272 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:33:11.272 00:33:11.272 00:33:11.272 CUnit - A unit testing framework for C - Version 2.1-3 00:33:11.272 http://cunit.sourceforge.net/ 00:33:11.272 00:33:11.272 00:33:11.272 Suite: bdevio tests on: raid5f 00:33:11.272 Test: blockdev write read block ...passed 00:33:11.272 Test: blockdev write zeroes read block ...passed 00:33:11.272 Test: blockdev write zeroes read no split ...passed 00:33:11.272 Test: blockdev write zeroes read split ...passed 00:33:11.531 Test: blockdev write zeroes read split partial ...passed 00:33:11.531 Test: blockdev reset ...passed 00:33:11.531 Test: blockdev write read 8 blocks ...passed 00:33:11.531 Test: blockdev write read size > 128k ...passed 00:33:11.531 Test: blockdev write read invalid size ...passed 00:33:11.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:11.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:11.531 Test: blockdev write read max offset ...passed 00:33:11.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:11.531 Test: blockdev writev readv 8 blocks ...passed 00:33:11.531 Test: blockdev writev readv 30 x 1block ...passed 00:33:11.531 Test: blockdev writev readv block ...passed 00:33:11.531 Test: blockdev writev readv size > 128k ...passed 00:33:11.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:11.531 Test: blockdev comparev and writev ...passed 00:33:11.531 Test: blockdev nvme passthru rw ...passed 00:33:11.531 Test: blockdev nvme passthru vendor specific ...passed 00:33:11.531 Test: blockdev nvme admin passthru ...passed 00:33:11.531 Test: blockdev copy ...passed 00:33:11.531 00:33:11.531 Run Summary: Type Total Ran Passed Failed Inactive 00:33:11.531 suites 1 1 n/a 0 0 00:33:11.531 tests 23 23 23 0 0 00:33:11.531 asserts 130 130 130 0 n/a 00:33:11.531 00:33:11.531 Elapsed time = 0.646 seconds 00:33:11.531 0 00:33:11.531 13:56:50 -- bdev/blockdev.sh@293 -- # killprocess 146419 00:33:11.531 13:56:50 -- common/autotest_common.sh@926 -- # '[' -z 146419 ']' 00:33:11.531 13:56:50 -- common/autotest_common.sh@930 -- # kill -0 146419 00:33:11.531 13:56:50 -- common/autotest_common.sh@931 -- # uname 00:33:11.531 13:56:50 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:11.531 13:56:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146419 00:33:11.531 killing process with pid 146419 00:33:11.531 13:56:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:11.531 13:56:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:11.531 13:56:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146419' 00:33:11.531 13:56:50 -- common/autotest_common.sh@945 -- # kill 146419 00:33:11.531 13:56:50 -- common/autotest_common.sh@950 -- # wait 146419 00:33:13.434 ************************************ 00:33:13.434 END TEST bdev_bounds 00:33:13.434 ************************************ 00:33:13.434 13:56:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:33:13.434 00:33:13.434 real 0m3.790s 00:33:13.434 user 0m9.543s 00:33:13.434 sys 0m0.413s 00:33:13.434 13:56:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.434 13:56:52 -- common/autotest_common.sh@10 -- # set +x 00:33:13.434 13:56:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:13.434 13:56:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:33:13.434 13:56:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:13.434 13:56:52 -- common/autotest_common.sh@10 -- # set +x 00:33:13.434 ************************************ 00:33:13.434 START TEST bdev_nbd 00:33:13.434 ************************************ 00:33:13.434 13:56:52 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:13.434 13:56:52 -- bdev/blockdev.sh@298 -- # uname -s 00:33:13.434 13:56:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:33:13.434 13:56:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:13.434 13:56:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:13.434 13:56:52 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:33:13.435 13:56:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:33:13.435 13:56:52 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:33:13.435 13:56:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:33:13.435 13:56:52 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:33:13.435 13:56:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:33:13.435 13:56:52 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:33:13.435 13:56:52 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:33:13.435 13:56:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:33:13.435 13:56:52 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:33:13.435 13:56:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:33:13.435 13:56:52 -- bdev/blockdev.sh@316 -- # nbd_pid=146495 00:33:13.435 13:56:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:13.435 13:56:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:13.435 13:56:52 -- bdev/blockdev.sh@318 -- # waitforlisten 146495 /var/tmp/spdk-nbd.sock 00:33:13.435 13:56:52 -- common/autotest_common.sh@819 -- # '[' -z 146495 ']' 00:33:13.435 13:56:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:13.435 13:56:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:13.435 13:56:52 
-- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:13.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:13.435 13:56:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:13.435 13:56:52 -- common/autotest_common.sh@10 -- # set +x 00:33:13.435 [2024-07-10 13:56:52.638623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:13.435 [2024-07-10 13:56:52.638868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.699 [2024-07-10 13:56:52.801210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.699 [2024-07-10 13:56:53.003552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.081 13:56:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:15.081 13:56:54 -- common/autotest_common.sh@852 -- # return 0 00:33:15.081 13:56:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@24 -- # local i 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:15.081 13:56:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:33:15.082 13:56:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:15.082 13:56:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:15.082 13:56:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:15.082 13:56:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:33:15.082 13:56:54 -- common/autotest_common.sh@857 -- # local i 00:33:15.082 13:56:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:15.082 13:56:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:15.082 13:56:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:33:15.082 13:56:54 -- common/autotest_common.sh@861 -- # break 00:33:15.082 13:56:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:15.082 13:56:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:15.082 13:56:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:15.082 1+0 records in 00:33:15.082 1+0 records out 00:33:15.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421889 s, 9.7 MB/s 00:33:15.341 13:56:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:15.341 13:56:54 -- common/autotest_common.sh@874 -- # size=4096 00:33:15.341 13:56:54 -- common/autotest_common.sh@875 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:15.341 13:56:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:15.341 13:56:54 -- common/autotest_common.sh@877 -- # return 0 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:15.341 { 00:33:15.341 "nbd_device": "/dev/nbd0", 00:33:15.341 "bdev_name": "raid5f" 00:33:15.341 } 00:33:15.341 ]' 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:15.341 { 00:33:15.341 "nbd_device": "/dev/nbd0", 00:33:15.341 "bdev_name": "raid5f" 00:33:15.341 } 00:33:15.341 ]' 00:33:15.341 13:56:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@51 -- # local i 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:15.599 13:56:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@41 -- # break 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@45 -- # return 0 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.600 13:56:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@65 -- # true 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@65 -- # count=0 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@122 -- # count=0 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@127 -- # return 0 00:33:15.859 13:56:55 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@91 -- # 
bdev_list=($2) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@12 -- # local i 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:15.859 13:56:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:33:16.117 /dev/nbd0 00:33:16.117 13:56:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:16.117 13:56:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:16.117 13:56:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:33:16.117 13:56:55 -- common/autotest_common.sh@857 -- # local i 00:33:16.117 13:56:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:33:16.117 13:56:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:33:16.117 13:56:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:33:16.117 13:56:55 -- common/autotest_common.sh@861 -- # break 00:33:16.117 13:56:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:33:16.117 13:56:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:33:16.118 13:56:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:16.118 1+0 records in 00:33:16.118 1+0 records out 00:33:16.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457419 s, 9.0 MB/s 00:33:16.118 13:56:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:16.118 13:56:55 -- common/autotest_common.sh@874 -- # size=4096 00:33:16.118 13:56:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:16.118 13:56:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:33:16.118 13:56:55 -- common/autotest_common.sh@877 -- # return 0 00:33:16.118 13:56:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:16.118 13:56:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:16.118 13:56:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:16.118 13:56:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:16.118 13:56:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:16.376 13:56:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:16.376 { 00:33:16.376 "nbd_device": "/dev/nbd0", 00:33:16.377 "bdev_name": "raid5f" 00:33:16.377 } 00:33:16.377 ]' 00:33:16.377 13:56:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:16.377 { 00:33:16.377 "nbd_device": "/dev/nbd0", 00:33:16.377 "bdev_name": "raid5f" 00:33:16.377 } 00:33:16.377 ]' 00:33:16.377 13:56:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@65 -- # count=1 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@66 -- # echo 1 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@95 -- # count=1 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:16.636 256+0 records in 00:33:16.636 256+0 records out 00:33:16.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122737 s, 85.4 MB/s 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:16.636 256+0 records in 00:33:16.636 256+0 records out 00:33:16.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313532 s, 33.4 MB/s 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:16.636 13:56:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@51 -- # local i 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:16.637 13:56:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:16.896 13:56:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@41 -- # break 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@45 -- # return 0 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:16.896 13:56:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@65 -- # true 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@65 -- # count=0 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@104 -- # count=0 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@109 -- # return 0 00:33:17.155 13:56:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:17.155 13:56:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:17.155 malloc_lvol_verify 00:33:17.414 13:56:56 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:17.414 51a5e528-e07b-4f2f-b128-0a7af28ba0bd 00:33:17.414 13:56:56 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:17.673 be8b9ef5-4972-46eb-8575-984e82be26a7 00:33:17.673 13:56:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:17.931 /dev/nbd0 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:17.931 mke2fs 1.45.5 (07-Jan-2020) 00:33:17.931 00:33:17.931 Filesystem too small for a journal 00:33:17.931 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:17.931 00:33:17.931 Allocating group tables: 0/1 done 00:33:17.931 Writing inode tables: 0/1 done 00:33:17.931 Writing superblocks and filesystem accounting information: 0/1 done 00:33:17.931 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@51 -- # local i 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:17.931 13:56:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:18.189 13:56:57 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@41 -- # break 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@45 -- # return 0 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:18.189 13:56:57 -- bdev/nbd_common.sh@147 -- # return 0 00:33:18.189 13:56:57 -- bdev/blockdev.sh@324 -- # killprocess 146495 00:33:18.189 13:56:57 -- common/autotest_common.sh@926 -- # '[' -z 146495 ']' 00:33:18.189 13:56:57 -- common/autotest_common.sh@930 -- # kill -0 146495 00:33:18.189 13:56:57 -- common/autotest_common.sh@931 -- # uname 00:33:18.189 13:56:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:18.189 13:56:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146495 00:33:18.189 killing process with pid 146495 00:33:18.189 13:56:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:18.189 13:56:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:18.189 13:56:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146495' 00:33:18.189 13:56:57 -- common/autotest_common.sh@945 -- # kill 146495 00:33:18.189 13:56:57 -- common/autotest_common.sh@950 -- # wait 146495 00:33:20.095 ************************************ 00:33:20.095 END TEST bdev_nbd 00:33:20.095 ************************************ 00:33:20.095 13:56:59 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:33:20.095 00:33:20.095 real 0m6.687s 00:33:20.095 user 0m8.898s 00:33:20.095 sys 0m1.227s 00:33:20.095 13:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:20.095 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:33:20.095 13:56:59 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:33:20.095 13:56:59 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:33:20.095 13:56:59 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:33:20.095 13:56:59 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:33:20.095 13:56:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:20.095 13:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:20.095 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:33:20.096 ************************************ 00:33:20.096 START TEST bdev_fio 00:33:20.096 ************************************ 00:33:20.096 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:20.096 13:56:59 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:33:20.096 13:56:59 -- bdev/blockdev.sh@329 -- # local env_context 00:33:20.096 13:56:59 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:20.096 13:56:59 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:20.096 13:56:59 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:33:20.096 13:56:59 -- bdev/blockdev.sh@337 -- # echo '' 00:33:20.096 13:56:59 -- bdev/blockdev.sh@337 -- # env_context= 00:33:20.096 13:56:59 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO 
'' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:20.096 13:56:59 -- common/autotest_common.sh@1260 -- # local workload=verify 00:33:20.096 13:56:59 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:33:20.096 13:56:59 -- common/autotest_common.sh@1262 -- # local env_context= 00:33:20.096 13:56:59 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:33:20.096 13:56:59 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:20.096 13:56:59 -- common/autotest_common.sh@1280 -- # cat 00:33:20.096 13:56:59 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1293 -- # cat 00:33:20.096 13:56:59 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:33:20.096 13:56:59 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:20.096 13:56:59 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:33:20.096 13:56:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:33:20.096 13:56:59 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:33:20.096 13:56:59 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:33:20.096 13:56:59 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:20.096 13:56:59 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:20.096 13:56:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:20.096 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:33:20.096 ************************************ 00:33:20.096 START TEST bdev_fio_rw_verify 00:33:20.096 ************************************ 00:33:20.096 13:56:59 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:20.096 13:56:59 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:20.096 13:56:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:20.096 13:56:59 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:33:20.096 13:56:59 -- common/autotest_common.sh@1318 -- # local 
sanitizers 00:33:20.096 13:56:59 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:20.096 13:56:59 -- common/autotest_common.sh@1320 -- # shift 00:33:20.096 13:56:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:20.096 13:56:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.096 13:56:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:20.096 13:56:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:20.096 13:56:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:33:20.096 13:56:59 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:33:20.096 13:56:59 -- common/autotest_common.sh@1326 -- # break 00:33:20.096 13:56:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:20.096 13:56:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:20.361 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:20.361 fio-3.35 00:33:20.361 Starting 1 thread 00:33:32.572 00:33:32.572 job_raid5f: (groupid=0, jobs=1): err= 0: pid=146766: Wed Jul 10 13:57:10 2024 00:33:32.572 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(412MiB/10001msec) 00:33:32.572 slat (usec): min=16, max=1360, avg=21.13, stdev= 5.92 00:33:32.572 clat (usec): min=11, max=1603, avg=139.21, stdev=54.22 00:33:32.572 lat (usec): min=31, max=1626, avg=160.33, stdev=55.75 00:33:32.572 clat percentiles (usec): 00:33:32.572 | 50.000th=[ 137], 99.000th=[ 245], 99.900th=[ 408], 99.990th=[ 865], 00:33:32.572 | 99.999th=[ 1012] 00:33:32.573 write: IOPS=11.0k, BW=43.1MiB/s (45.1MB/s)(426MiB/9889msec); 0 zone resets 00:33:32.573 slat (usec): min=8, max=236, avg=21.28, stdev= 5.41 00:33:32.573 clat (usec): min=63, max=1808, avg=360.77, stdev=82.56 00:33:32.573 lat (usec): min=82, max=1858, avg=382.05, stdev=85.54 00:33:32.573 clat percentiles (usec): 00:33:32.573 | 50.000th=[ 359], 99.000th=[ 515], 99.900th=[ 1467], 99.990th=[ 1729], 00:33:32.573 | 99.999th=[ 1795] 00:33:32.573 bw ( KiB/s): min=36784, max=53080, per=98.74%, avg=43536.84, stdev=4315.02, samples=19 00:33:32.573 iops : min= 9196, max=13270, avg=10884.21, stdev=1078.75, samples=19 00:33:32.573 lat (usec) : 20=0.01%, 50=0.01%, 100=14.67%, 250=36.08%, 500=48.59% 00:33:32.573 lat (usec) : 750=0.50%, 1000=0.03% 00:33:32.573 lat (msec) : 2=0.13% 00:33:32.573 cpu : usr=99.36%, sys=0.56%, ctx=166, majf=0, minf=7484 00:33:32.573 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.573 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.573 issued rwts: total=105484,109000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.573 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:32.573 00:33:32.573 Run status group 0 (all jobs): 00:33:32.573 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=412MiB (432MB), 
run=10001-10001msec 00:33:32.573 WRITE: bw=43.1MiB/s (45.1MB/s), 43.1MiB/s-43.1MiB/s (45.1MB/s-45.1MB/s), io=426MiB (446MB), run=9889-9889msec 00:33:32.831 ----------------------------------------------------- 00:33:32.831 Suppressions used: 00:33:32.831 count bytes template 00:33:32.831 1 7 /usr/src/fio/parse.c 00:33:32.831 241 23136 /usr/src/fio/iolog.c 00:33:32.831 2 596 libcrypto.so 00:33:32.831 ----------------------------------------------------- 00:33:32.831 00:33:33.091 ************************************ 00:33:33.091 END TEST bdev_fio_rw_verify 00:33:33.091 ************************************ 00:33:33.091 00:33:33.091 real 0m12.838s 00:33:33.091 user 0m13.242s 00:33:33.091 sys 0m0.564s 00:33:33.091 13:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.091 13:57:12 -- common/autotest_common.sh@10 -- # set +x 00:33:33.091 13:57:12 -- bdev/blockdev.sh@348 -- # rm -f 00:33:33.091 13:57:12 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:33.091 13:57:12 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:33.091 13:57:12 -- common/autotest_common.sh@1260 -- # local workload=trim 00:33:33.091 13:57:12 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:33:33.091 13:57:12 -- common/autotest_common.sh@1262 -- # local env_context= 00:33:33.091 13:57:12 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:33:33.091 13:57:12 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:33.091 13:57:12 -- common/autotest_common.sh@1280 -- # cat 00:33:33.091 13:57:12 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:33:33.091 13:57:12 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:33.091 13:57:12 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4cd2a8d9-e03e-42ac-9c64-e9a8723a569f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c246e229-0482-4ace-9ef5-47c45d6db8e7",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "78b5b9a5-de07-40f6-936a-c2cb0d2e5bf5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "8593da6a-8a27-422f-83ab-c460239c3b9c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:33.091 13:57:12 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:33:33.091 13:57:12 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:33.091 /home/vagrant/spdk_repo/spdk 00:33:33.091 ************************************ 00:33:33.091 END TEST bdev_fio 00:33:33.091 ************************************ 00:33:33.091 13:57:12 -- bdev/blockdev.sh@360 -- # popd 00:33:33.091 13:57:12 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:33:33.091 13:57:12 -- bdev/blockdev.sh@362 -- # return 0 00:33:33.091 00:33:33.091 real 0m13.011s 00:33:33.091 user 0m13.350s 00:33:33.091 sys 0m0.635s 00:33:33.091 13:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.091 13:57:12 -- common/autotest_common.sh@10 -- # set +x 00:33:33.091 13:57:12 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:33.091 13:57:12 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:33:33.091 13:57:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:33.091 13:57:12 -- common/autotest_common.sh@10 -- # set +x 00:33:33.091 ************************************ 00:33:33.091 START TEST bdev_verify 00:33:33.091 ************************************ 00:33:33.091 13:57:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:33.091 [2024-07-10 13:57:12.412864] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:33.091 [2024-07-10 13:57:12.413100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146954 ] 00:33:33.350 [2024-07-10 13:57:12.576893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:33.609 [2024-07-10 13:57:12.777151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.609 [2024-07-10 13:57:12.777157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.176 Running I/O for 5 seconds... 
00:33:39.450 00:33:39.450 Latency(us) 00:33:39.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.450 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:39.450 Verification LBA range: start 0x0 length 0x2000 00:33:39.450 raid5f : 5.01 10728.07 41.91 0.00 0.00 18904.86 132.36 15911.80 00:33:39.450 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:39.450 Verification LBA range: start 0x2000 length 0x2000 00:33:39.450 raid5f : 5.01 10797.93 42.18 0.00 0.00 18782.00 250.41 15453.90 00:33:39.450 =================================================================================================================== 00:33:39.450 Total : 21526.00 84.09 0.00 0.00 18843.24 132.36 15911.80 00:33:40.821 ************************************ 00:33:40.821 END TEST bdev_verify 00:33:40.821 ************************************ 00:33:40.821 00:33:40.821 real 0m7.502s 00:33:40.821 user 0m13.796s 00:33:40.821 sys 0m0.226s 00:33:40.821 13:57:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.821 13:57:19 -- common/autotest_common.sh@10 -- # set +x 00:33:40.822 13:57:19 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:40.822 13:57:19 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:33:40.822 13:57:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:40.822 13:57:19 -- common/autotest_common.sh@10 -- # set +x 00:33:40.822 ************************************ 00:33:40.822 START TEST bdev_verify_big_io 00:33:40.822 ************************************ 00:33:40.822 13:57:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:40.822 [2024-07-10 13:57:19.979877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:40.822 [2024-07-10 13:57:19.980509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147075 ] 00:33:40.822 [2024-07-10 13:57:20.143246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:41.080 [2024-07-10 13:57:20.340408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.080 [2024-07-10 13:57:20.340411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.647 Running I/O for 5 seconds... 
00:33:46.909 00:33:46.909 Latency(us) 00:33:46.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.909 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:46.909 Verification LBA range: start 0x0 length 0x200 00:33:46.909 raid5f : 5.14 758.92 47.43 0.00 0.00 4400625.94 120.73 146525.90 00:33:46.909 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:46.909 Verification LBA range: start 0x200 length 0x200 00:33:46.909 raid5f : 5.14 760.72 47.55 0.00 0.00 4389031.40 153.82 147441.69 00:33:46.909 =================================================================================================================== 00:33:46.909 Total : 1519.64 94.98 0.00 0.00 4394821.99 120.73 147441.69 00:33:48.282 ************************************ 00:33:48.282 END TEST bdev_verify_big_io 00:33:48.282 ************************************ 00:33:48.282 00:33:48.282 real 0m7.630s 00:33:48.282 user 0m14.008s 00:33:48.282 sys 0m0.281s 00:33:48.282 13:57:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.282 13:57:27 -- common/autotest_common.sh@10 -- # set +x 00:33:48.283 13:57:27 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.283 13:57:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:48.283 13:57:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:48.283 13:57:27 -- common/autotest_common.sh@10 -- # set +x 00:33:48.283 ************************************ 00:33:48.283 START TEST bdev_write_zeroes 00:33:48.283 ************************************ 00:33:48.283 13:57:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:48.540 [2024-07-10 13:57:27.669034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:48.540 [2024-07-10 13:57:27.669744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147188 ] 00:33:48.540 [2024-07-10 13:57:27.829495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.798 [2024-07-10 13:57:28.020416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.363 Running I/O for 1 seconds... 
00:33:50.297 00:33:50.297 Latency(us) 00:33:50.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.297 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:50.297 raid5f : 1.01 25046.99 97.84 0.00 0.00 5092.29 1438.07 7068.73 00:33:50.297 =================================================================================================================== 00:33:50.297 Total : 25046.99 97.84 0.00 0.00 5092.29 1438.07 7068.73 00:33:52.199 ************************************ 00:33:52.199 END TEST bdev_write_zeroes 00:33:52.199 ************************************ 00:33:52.199 00:33:52.199 real 0m3.500s 00:33:52.199 user 0m3.164s 00:33:52.199 sys 0m0.221s 00:33:52.199 13:57:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.199 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:33:52.199 13:57:31 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:52.199 13:57:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:52.199 13:57:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.199 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:33:52.199 ************************************ 00:33:52.199 START TEST bdev_json_nonenclosed 00:33:52.199 ************************************ 00:33:52.199 13:57:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:52.199 [2024-07-10 13:57:31.236018] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:52.199 [2024-07-10 13:57:31.236278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147262 ] 00:33:52.199 [2024-07-10 13:57:31.395875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.458 [2024-07-10 13:57:31.589128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.458 [2024-07-10 13:57:31.589336] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
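The nonenclosed.json fixture driving the failure above is deliberately malformed. A sketch of the kind of input that trips this check — the exact fixture contents are an assumption inferred from the error text, which requires the top-level value to be a JSON object:

# Assumed shape of a config that is "not enclosed in {}": a bare
# key/value pair with no surrounding top-level object.
cat > nonenclosed.json <<'EOF'
"subsystems": [
  {"subsystem": "bdev", "config": []}
]
EOF
# spdk_subsystem_init_from_json_config rejects it with:
#   *ERROR*: Invalid JSON configuration: not enclosed in {}.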
00:33:52.458 [2024-07-10 13:57:31.589436] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:52.719 ************************************ 00:33:52.719 END TEST bdev_json_nonenclosed 00:33:52.719 ************************************ 00:33:52.719 00:33:52.719 real 0m0.834s 00:33:52.719 user 0m0.609s 00:33:52.719 sys 0m0.125s 00:33:52.719 13:57:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.719 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:33:52.719 13:57:32 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:52.719 13:57:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:52.719 13:57:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.719 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:33:52.978 ************************************ 00:33:52.978 START TEST bdev_json_nonarray 00:33:52.978 ************************************ 00:33:52.978 13:57:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:52.978 [2024-07-10 13:57:32.123924] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:52.978 [2024-07-10 13:57:32.124599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147298 ] 00:33:52.978 [2024-07-10 13:57:32.284761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.236 [2024-07-10 13:57:32.472813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.236 [2024-07-10 13:57:32.473058] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
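Likewise, the nonarray.json fixture plausibly supplies "subsystems" as an object instead of an array — again an assumption inferred from the error message below:

cat > nonarray.json <<'EOF'
{"subsystems": {"subsystem": "bdev", "config": []}}
EOF
# The parser requires "subsystems" to be a JSON array, hence:
#   *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.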
00:33:53.236 [2024-07-10 13:57:32.473119] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:53.805 ************************************ 00:33:53.805 END TEST bdev_json_nonarray 00:33:53.805 ************************************ 00:33:53.805 00:33:53.805 real 0m0.834s 00:33:53.805 user 0m0.603s 00:33:53.805 sys 0m0.128s 00:33:53.805 13:57:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.805 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:33:53.805 13:57:32 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:33:53.805 13:57:32 -- bdev/blockdev.sh@809 -- # cleanup 00:33:53.805 13:57:32 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:53.805 13:57:32 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:53.805 13:57:32 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:33:53.805 13:57:32 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:33:53.805 00:33:53.805 real 0m52.021s 00:33:53.805 user 1m11.744s 00:33:53.805 sys 0m4.312s 00:33:53.805 13:57:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.805 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:33:53.805 ************************************ 00:33:53.805 END TEST blockdev_raid5f 00:33:53.805 ************************************ 00:33:53.805 13:57:32 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:33:53.805 13:57:32 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:33:53.805 13:57:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:53.805 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:33:53.805 13:57:33 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:33:53.805 13:57:33 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:33:53.805 13:57:33 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:33:53.805 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:33:55.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:55.711 Waiting for block devices as requested 00:33:55.711 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:33:55.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:55.969 Cleaning 00:33:55.969 Removing: /var/run/dpdk/spdk0/config 00:33:56.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:56.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:56.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:56.228 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:56.228 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:56.228 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:56.228 Removing: /dev/shm/spdk_tgt_trace.pid104979 00:33:56.228 Removing: /var/run/dpdk/spdk0 00:33:56.228 Removing: /var/run/dpdk/spdk_pid104711 00:33:56.228 Removing: /var/run/dpdk/spdk_pid104979 00:33:56.228 Removing: /var/run/dpdk/spdk_pid105271 00:33:56.228 Removing: /var/run/dpdk/spdk_pid105542 00:33:56.228 Removing: /var/run/dpdk/spdk_pid105728 00:33:56.228 Removing: /var/run/dpdk/spdk_pid105854 00:33:56.228 Removing: /var/run/dpdk/spdk_pid105965 
00:33:56.228 Removing: /var/run/dpdk/spdk_pid106107 00:33:56.228 Removing: /var/run/dpdk/spdk_pid106224 00:33:56.228 Removing: /var/run/dpdk/spdk_pid106274 00:33:56.228 Removing: /var/run/dpdk/spdk_pid106325 00:33:56.228 Removing: /var/run/dpdk/spdk_pid106411 00:33:56.228 Removing: /var/run/dpdk/spdk_pid106558 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107111 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107214 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107293 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107321 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107484 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107519 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107679 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107710 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107793 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107823 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107892 00:33:56.228 Removing: /var/run/dpdk/spdk_pid107924 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108135 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108185 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108233 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108342 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108445 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108488 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108594 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108645 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108704 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108739 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108799 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108857 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108918 00:33:56.228 Removing: /var/run/dpdk/spdk_pid108950 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109025 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109065 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109124 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109170 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109247 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109293 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109347 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109410 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109457 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109503 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109557 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109618 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109672 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109719 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109789 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109832 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109886 00:33:56.228 Removing: /var/run/dpdk/spdk_pid109927 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110003 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110050 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110103 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110167 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110226 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110271 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110326 00:33:56.228 Removing: /var/run/dpdk/spdk_pid110395 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110457 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110501 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110581 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110620 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110674 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110720 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110798 00:33:56.487 Removing: /var/run/dpdk/spdk_pid110895 00:33:56.487 Removing: /var/run/dpdk/spdk_pid111027 00:33:56.487 Removing: /var/run/dpdk/spdk_pid111244 00:33:56.487 
Removing: /var/run/dpdk/spdk_pid111362 00:33:56.487 Removing: /var/run/dpdk/spdk_pid111436 00:33:56.487 Removing: /var/run/dpdk/spdk_pid112790 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113034 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113274 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113426 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113598 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113699 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113722 00:33:56.487 Removing: /var/run/dpdk/spdk_pid113760 00:33:56.487 Removing: /var/run/dpdk/spdk_pid114287 00:33:56.487 Removing: /var/run/dpdk/spdk_pid114387 00:33:56.487 Removing: /var/run/dpdk/spdk_pid114524 00:33:56.487 Removing: /var/run/dpdk/spdk_pid114587 00:33:56.487 Removing: /var/run/dpdk/spdk_pid115837 00:33:56.487 Removing: /var/run/dpdk/spdk_pid116759 00:33:56.487 Removing: /var/run/dpdk/spdk_pid117664 00:33:56.487 Removing: /var/run/dpdk/spdk_pid118808 00:33:56.487 Removing: /var/run/dpdk/spdk_pid119915 00:33:56.487 Removing: /var/run/dpdk/spdk_pid121025 00:33:56.487 Removing: /var/run/dpdk/spdk_pid122557 00:33:56.487 Removing: /var/run/dpdk/spdk_pid123781 00:33:56.487 Removing: /var/run/dpdk/spdk_pid125011 00:33:56.487 Removing: /var/run/dpdk/spdk_pid125699 00:33:56.487 Removing: /var/run/dpdk/spdk_pid126239 00:33:56.487 Removing: /var/run/dpdk/spdk_pid126876 00:33:56.487 Removing: /var/run/dpdk/spdk_pid127377 00:33:56.487 Removing: /var/run/dpdk/spdk_pid127971 00:33:56.487 Removing: /var/run/dpdk/spdk_pid128573 00:33:56.487 Removing: /var/run/dpdk/spdk_pid129262 00:33:56.487 Removing: /var/run/dpdk/spdk_pid129821 00:33:56.487 Removing: /var/run/dpdk/spdk_pid131330 00:33:56.487 Removing: /var/run/dpdk/spdk_pid131978 00:33:56.487 Removing: /var/run/dpdk/spdk_pid132570 00:33:56.487 Removing: /var/run/dpdk/spdk_pid134155 00:33:56.487 Removing: /var/run/dpdk/spdk_pid134844 00:33:56.487 Removing: /var/run/dpdk/spdk_pid135501 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136318 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136376 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136427 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136516 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136647 00:33:56.487 Removing: /var/run/dpdk/spdk_pid136813 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137022 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137329 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137344 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137404 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137438 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137484 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137516 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137549 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137587 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137619 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137661 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137694 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137733 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137765 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137822 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137861 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137893 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137926 00:33:56.487 Removing: /var/run/dpdk/spdk_pid137978 00:33:56.487 Removing: /var/run/dpdk/spdk_pid138010 00:33:56.487 Removing: /var/run/dpdk/spdk_pid138042 00:33:56.487 Removing: /var/run/dpdk/spdk_pid138090 00:33:56.487 Removing: /var/run/dpdk/spdk_pid138126 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138198 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138279 00:33:56.745 Removing: 
/var/run/dpdk/spdk_pid138332 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138364 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138415 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138456 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138483 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138554 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138585 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138650 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138683 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138707 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138736 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138765 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138810 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138841 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138870 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138920 00:33:56.745 Removing: /var/run/dpdk/spdk_pid138990 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139023 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139068 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139103 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139135 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139222 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139249 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139297 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139330 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139376 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139401 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139437 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139466 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139495 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139542 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139632 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139772 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139956 00:33:56.745 Removing: /var/run/dpdk/spdk_pid139991 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140055 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140120 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140180 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140214 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140255 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140304 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140361 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140448 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140520 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140577 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140868 00:33:56.745 Removing: /var/run/dpdk/spdk_pid140998 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141051 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141145 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141259 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141314 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141610 00:33:56.745 Removing: /var/run/dpdk/spdk_pid141994 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142119 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142176 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142214 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142297 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142870 00:33:56.745 Removing: /var/run/dpdk/spdk_pid142920 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143284 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143507 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143644 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143700 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143738 00:33:56.745 Removing: /var/run/dpdk/spdk_pid143776 00:33:56.745 Removing: /var/run/dpdk/spdk_pid145217 00:33:56.745 Removing: /var/run/dpdk/spdk_pid145391 00:33:56.745 Removing: /var/run/dpdk/spdk_pid145396 00:33:56.745 Removing: 
/var/run/dpdk/spdk_pid145413 00:33:56.745 Removing: /var/run/dpdk/spdk_pid145966 00:33:56.745 Removing: /var/run/dpdk/spdk_pid146086 00:33:56.745 Removing: /var/run/dpdk/spdk_pid146264 00:33:56.745 Removing: /var/run/dpdk/spdk_pid146349 00:33:57.003 Removing: /var/run/dpdk/spdk_pid146419 00:33:57.003 Removing: /var/run/dpdk/spdk_pid146747 00:33:57.003 Removing: /var/run/dpdk/spdk_pid146954 00:33:57.003 Removing: /var/run/dpdk/spdk_pid147075 00:33:57.003 Removing: /var/run/dpdk/spdk_pid147188 00:33:57.003 Removing: /var/run/dpdk/spdk_pid147262 00:33:57.003 Removing: /var/run/dpdk/spdk_pid147298 00:33:57.003 Clean 00:33:57.003 killing process with pid 93960 00:33:57.003 killing process with pid 94044 00:33:57.003 13:57:36 -- common/autotest_common.sh@1436 -- # return 0 00:33:57.003 13:57:36 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:33:57.003 13:57:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:57.003 13:57:36 -- common/autotest_common.sh@10 -- # set +x 00:33:57.003 13:57:36 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:33:57.003 13:57:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:57.003 13:57:36 -- common/autotest_common.sh@10 -- # set +x 00:33:57.260 13:57:36 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:57.260 13:57:36 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:57.260 13:57:36 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:57.260 13:57:36 -- spdk/autotest.sh@394 -- # hash lcov 00:33:57.260 13:57:36 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:57.260 13:57:36 -- spdk/autotest.sh@396 -- # hostname 00:33:57.260 13:57:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:57.260 geninfo: WARNING: invalid characters removed from testname! 
00:34:43.934 13:58:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:45.840 13:58:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:49.136 13:58:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:51.670 13:58:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:54.956 13:58:33 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:57.494 13:58:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:00.781 13:58:39 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:00.781 13:58:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:00.781 13:58:39 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:00.781 13:58:39 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.781 13:58:39 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.781 13:58:39 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:00.781 13:58:39 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:00.781 13:58:39 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:00.781 13:58:39 -- paths/export.sh@5 -- $ export PATH 00:35:00.781 13:58:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:00.781 13:58:39 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:00.781 13:58:39 -- common/autobuild_common.sh@435 -- $ date +%s 00:35:00.781 13:58:39 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720619919.XXXXXX 00:35:00.781 13:58:39 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720619919.w1pP79 00:35:00.781 13:58:39 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:35:00.781 13:58:39 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:35:00.781 13:58:39 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:35:00.781 13:58:39 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:00.781 13:58:39 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:00.781 13:58:39 -- common/autobuild_common.sh@451 -- $ get_config_params 00:35:00.781 13:58:39 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:35:00.781 13:58:39 -- common/autotest_common.sh@10 -- $ set +x 00:35:00.782 13:58:39 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:35:00.782 13:58:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:35:00.782 13:58:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:35:00.782 13:58:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:00.782 13:58:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:00.782 13:58:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:35:00.782 13:58:39 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:35:00.782 13:58:39 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:35:00.782 13:58:39 -- common/autotest_common.sh@10 -- $ set +x 00:35:00.782 13:58:39 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:35:00.782 13:58:39 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:35:00.782 13:58:39 -- spdk/autopackage.sh@40 -- $ get_config_params 00:35:00.782 13:58:39 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:35:00.782 13:58:39 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:35:00.782 13:58:39 -- common/autotest_common.sh@10 -- $ set +x 00:35:00.782 13:58:39 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:35:00.782 13:58:39 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:35:00.782 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:00.782 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:01.041 Using 'verbs' RDMA provider 00:35:16.492 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:35:28.709 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:35:28.709 Creating mk/config.mk...done. 00:35:28.709 Creating mk/cc.flags.mk...done. 00:35:28.709 Type 'make' to build. 00:35:28.709 13:59:06 -- spdk/autopackage.sh@43 -- $ make -j10 00:35:28.709 make[1]: Nothing to be done for 'all'. 00:35:32.903 The Meson build system 00:35:32.903 Version: 1.4.0 00:35:32.903 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:35:32.903 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:35:32.903 Build type: native build 00:35:32.903 Program cat found: YES (/usr/bin/cat) 00:35:32.903 Project name: DPDK 00:35:32.903 Project version: 23.11.0 00:35:32.903 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:35:32.903 C linker for the host machine: cc ld.bfd 2.34 00:35:32.903 Host machine cpu family: x86_64 00:35:32.903 Host machine cpu: x86_64 00:35:32.903 Message: ## Building in Developer Mode ## 00:35:32.903 Program pkg-config found: YES (/usr/bin/pkg-config) 00:35:32.903 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:35:32.903 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:35:32.903 Program python3 found: YES (/usr/bin/python3) 00:35:32.903 Program cat found: YES (/usr/bin/cat) 00:35:32.903 Compiler for C supports arguments -march=native: YES 00:35:32.903 Checking for size of "void *" : 8 00:35:32.903 Checking for size of "void *" : 8 (cached) 00:35:32.903 Library m found: YES 00:35:32.903 Library numa found: YES 00:35:32.903 Has header "numaif.h" : YES 00:35:32.903 Library fdt found: NO 00:35:32.903 Library execinfo found: NO 00:35:32.903 Has header "execinfo.h" : YES 00:35:32.903 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:35:32.903 Run-time dependency libarchive found: NO (tried pkgconfig) 00:35:32.903 Run-time dependency libbsd found: NO (tried pkgconfig) 00:35:32.903 Run-time dependency jansson found: NO (tried pkgconfig) 00:35:32.903 Run-time dependency openssl found: YES 1.1.1f 00:35:32.903 Run-time dependency libpcap found: NO (tried pkgconfig) 00:35:32.903 Library pcap found: NO 00:35:32.903 Compiler for C supports arguments -Wcast-qual: YES 00:35:32.903 Compiler for C supports arguments -Wdeprecated: YES 00:35:32.903 Compiler for C supports arguments -Wformat: YES 00:35:32.903 Compiler for C supports arguments -Wformat-nonliteral: YES 00:35:32.903 Compiler for C supports arguments -Wformat-security: YES 00:35:32.903 Compiler for C supports arguments -Wmissing-declarations: YES 00:35:32.903 Compiler for C supports arguments -Wmissing-prototypes: YES 00:35:32.903 Compiler for C supports arguments -Wnested-externs: YES 00:35:32.903 Compiler for C supports arguments -Wold-style-definition: YES 00:35:32.903 Compiler for C supports arguments -Wpointer-arith: YES 00:35:32.903 Compiler for C supports arguments -Wsign-compare: YES 00:35:32.903 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:35:32.903 Compiler for C supports arguments -Wundef: YES 00:35:32.903 Compiler for C supports arguments -Wwrite-strings: YES 00:35:32.903 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:35:32.903 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:35:32.903 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:35:32.903 Program objdump found: YES (/usr/bin/objdump) 00:35:32.903 Compiler for C supports arguments -mavx512f: YES 00:35:32.903 Checking if "AVX512 checking" compiles: YES 00:35:32.903 Fetching value of define "__SSE4_2__" : 1 00:35:32.903 Fetching value of define "__AES__" : 1 00:35:32.903 Fetching value of define "__AVX__" : 1 00:35:32.903 Fetching value of define "__AVX2__" : 1 00:35:32.903 Fetching value of define "__AVX512BW__" : 1 00:35:32.903 Fetching value of define "__AVX512CD__" : 1 00:35:32.903 Fetching value of define "__AVX512DQ__" : 1 00:35:32.903 Fetching value of define "__AVX512F__" : 1 00:35:32.903 Fetching value of define "__AVX512VL__" : 1 00:35:32.903 Fetching value of define "__PCLMUL__" : 1 00:35:32.903 Fetching value of define "__RDRND__" : 1 00:35:32.903 Fetching value of define "__RDSEED__" : 1 00:35:32.903 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:35:32.903 Fetching value of define "__znver1__" : (undefined) 00:35:32.903 Fetching value of define "__znver2__" : (undefined) 00:35:32.903 Fetching value of define "__znver3__" : (undefined) 00:35:32.903 Fetching value of define "__znver4__" : (undefined) 00:35:32.904 Compiler for C supports arguments -ffat-lto-objects: YES 00:35:32.904 Library asan found: YES 00:35:32.904 Compiler for C supports arguments -Wno-format-truncation: YES 00:35:32.904 Message: lib/log: Defining dependency "log" 00:35:32.904 Message: lib/kvargs: Defining dependency "kvargs" 00:35:32.904 Message: lib/telemetry: Defining dependency "telemetry" 00:35:32.904 Library rt found: YES 00:35:32.904 Checking for function "getentropy" : NO 00:35:32.904 Message: lib/eal: Defining dependency "eal" 00:35:32.904 Message: lib/ring: Defining dependency "ring" 00:35:32.904 Message: lib/rcu: Defining dependency "rcu" 00:35:32.904 Message: lib/mempool: Defining dependency "mempool" 00:35:32.904 Message: lib/mbuf: Defining dependency "mbuf" 00:35:32.904 Fetching value of define "__PCLMUL__" : 1 (cached) 00:35:32.904 Fetching value of define "__AVX512F__" : 1 (cached) 00:35:32.904 Fetching value of define "__AVX512BW__" : 1 (cached) 00:35:32.904 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:35:32.904 Fetching value of define "__AVX512VL__" : 1 (cached) 00:35:32.904 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:35:32.904 Compiler for C supports arguments -mpclmul: YES 00:35:32.904 Compiler for C supports arguments -maes: YES 00:35:32.904 Compiler for C supports arguments -mavx512f: YES (cached) 00:35:32.904 Compiler for C supports arguments -mavx512bw: YES 00:35:32.904 Compiler for C supports arguments -mavx512dq: YES 00:35:32.904 Compiler for C supports arguments -mavx512vl: YES 00:35:32.904 Compiler for C supports arguments -mvpclmulqdq: YES 00:35:32.904 Compiler for C supports arguments -mavx2: YES 00:35:32.904 Compiler for C supports arguments -mavx: YES 00:35:32.904 Message: lib/net: Defining dependency "net" 00:35:32.904 Message: lib/meter: Defining dependency "meter" 00:35:32.904 Message: lib/ethdev: Defining dependency "ethdev" 00:35:32.904 Message: lib/pci: Defining dependency "pci" 00:35:32.904 
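Each "Compiler for C supports arguments ...: YES" line above is Meson probing the host toolchain by compiling a trivial translation unit with the candidate flag. A stand-alone approximation of that probe in shell (a sketch, not Meson's actual implementation; the flag list is illustrative):

```shell
# Probe whether the host C compiler accepts a flag, the way the Meson
# checks above do: compile an empty program with -Werror so that an
# unrecognized or unsupported flag turns into a hard failure.
probe_cflag() {
    local flag=$1
    if echo 'int main(void) { return 0; }' |
        cc -Werror "$flag" -x c - -o /dev/null 2>/dev/null; then
        echo "Compiler for C supports arguments $flag: YES"
    else
        echo "Compiler for C supports arguments $flag: NO"
    fi
}

for flag in -mavx512f -mavx512bw -mpclmul -maes; do
    probe_cflag "$flag"
done
```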
Message: lib/cmdline: Defining dependency "cmdline" 00:35:32.904 Message: lib/hash: Defining dependency "hash" 00:35:32.904 Message: lib/timer: Defining dependency "timer" 00:35:32.904 Message: lib/compressdev: Defining dependency "compressdev" 00:35:32.904 Message: lib/cryptodev: Defining dependency "cryptodev" 00:35:32.904 Message: lib/dmadev: Defining dependency "dmadev" 00:35:32.904 Compiler for C supports arguments -Wno-cast-qual: YES 00:35:32.904 Message: lib/power: Defining dependency "power" 00:35:32.904 Message: lib/reorder: Defining dependency "reorder" 00:35:32.904 Message: lib/security: Defining dependency "security" 00:35:32.904 Has header "linux/userfaultfd.h" : YES 00:35:32.904 Has header "linux/vduse.h" : NO 00:35:32.904 Message: lib/vhost: Defining dependency "vhost" 00:35:32.904 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:35:32.904 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:35:32.904 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:35:32.904 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:35:32.904 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:35:32.904 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:35:32.904 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:35:32.904 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:35:32.904 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:35:32.904 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:35:32.904 Program doxygen found: YES (/usr/bin/doxygen) 00:35:32.904 Configuring doxy-api-html.conf using configuration 00:35:32.904 Configuring doxy-api-man.conf using configuration 00:35:32.904 Program mandb found: YES (/usr/bin/mandb) 00:35:32.904 Program sphinx-build found: NO 00:35:32.904 Configuring rte_build_config.h using configuration 00:35:32.904 Message: 00:35:32.904 ================= 00:35:32.904 Applications Enabled 00:35:32.904 ================= 00:35:32.904 00:35:32.904 apps: 00:35:32.904 00:35:32.904 00:35:32.904 Message: 00:35:32.904 ================= 00:35:32.904 Libraries Enabled 00:35:32.904 ================= 00:35:32.904 00:35:32.904 libs: 00:35:32.904 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:35:32.904 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:35:32.904 cryptodev, dmadev, power, reorder, security, vhost, 00:35:32.904 00:35:32.904 Message: 00:35:32.904 =============== 00:35:32.904 Drivers Enabled 00:35:32.904 =============== 00:35:32.904 00:35:32.904 common: 00:35:32.904 00:35:32.904 bus: 00:35:32.904 pci, vdev, 00:35:32.904 mempool: 00:35:32.904 ring, 00:35:32.904 dma: 00:35:32.904 00:35:32.904 net: 00:35:32.904 00:35:32.904 crypto: 00:35:32.904 00:35:32.904 compress: 00:35:32.904 00:35:32.904 vdpa: 00:35:32.904 00:35:32.904 00:35:32.904 Message: 00:35:32.904 ================= 00:35:32.904 Content Skipped 00:35:32.904 ================= 00:35:32.904 00:35:32.904 apps: 00:35:32.904 dumpcap: explicitly disabled via build config 00:35:32.904 graph: explicitly disabled via build config 00:35:32.904 pdump: explicitly disabled via build config 00:35:32.904 proc-info: explicitly disabled via build config 00:35:32.904 test-acl: explicitly disabled via build config 00:35:32.904 test-bbdev: explicitly disabled via build config 00:35:32.904 test-cmdline: explicitly disabled via build config 00:35:32.904 test-compress-perf: explicitly disabled via build config 
00:35:32.904 test-crypto-perf: explicitly disabled via build config 00:35:32.904 test-dma-perf: explicitly disabled via build config 00:35:32.904 test-eventdev: explicitly disabled via build config 00:35:32.904 test-fib: explicitly disabled via build config 00:35:32.904 test-flow-perf: explicitly disabled via build config 00:35:32.904 test-gpudev: explicitly disabled via build config 00:35:32.904 test-mldev: explicitly disabled via build config 00:35:32.904 test-pipeline: explicitly disabled via build config 00:35:32.904 test-pmd: explicitly disabled via build config 00:35:32.904 test-regex: explicitly disabled via build config 00:35:32.904 test-sad: explicitly disabled via build config 00:35:32.904 test-security-perf: explicitly disabled via build config 00:35:32.904 00:35:32.904 libs: 00:35:32.904 metrics: explicitly disabled via build config 00:35:32.904 acl: explicitly disabled via build config 00:35:32.904 bbdev: explicitly disabled via build config 00:35:32.904 bitratestats: explicitly disabled via build config 00:35:32.904 bpf: explicitly disabled via build config 00:35:32.904 cfgfile: explicitly disabled via build config 00:35:32.904 distributor: explicitly disabled via build config 00:35:32.904 efd: explicitly disabled via build config 00:35:32.904 eventdev: explicitly disabled via build config 00:35:32.904 dispatcher: explicitly disabled via build config 00:35:32.904 gpudev: explicitly disabled via build config 00:35:32.904 gro: explicitly disabled via build config 00:35:32.904 gso: explicitly disabled via build config 00:35:32.904 ip_frag: explicitly disabled via build config 00:35:32.904 jobstats: explicitly disabled via build config 00:35:32.904 latencystats: explicitly disabled via build config 00:35:32.904 lpm: explicitly disabled via build config 00:35:32.904 member: explicitly disabled via build config 00:35:32.904 pcapng: explicitly disabled via build config 00:35:32.904 rawdev: explicitly disabled via build config 00:35:32.904 regexdev: explicitly disabled via build config 00:35:32.904 mldev: explicitly disabled via build config 00:35:32.904 rib: explicitly disabled via build config 00:35:32.904 sched: explicitly disabled via build config 00:35:32.904 stack: explicitly disabled via build config 00:35:32.904 ipsec: explicitly disabled via build config 00:35:32.904 pdcp: explicitly disabled via build config 00:35:32.904 fib: explicitly disabled via build config 00:35:32.904 port: explicitly disabled via build config 00:35:32.904 pdump: explicitly disabled via build config 00:35:32.904 table: explicitly disabled via build config 00:35:32.904 pipeline: explicitly disabled via build config 00:35:32.904 graph: explicitly disabled via build config 00:35:32.904 node: explicitly disabled via build config 00:35:32.904 00:35:32.904 drivers: 00:35:32.904 common/cpt: not in enabled drivers build config 00:35:32.904 common/dpaax: not in enabled drivers build config 00:35:32.904 common/iavf: not in enabled drivers build config 00:35:32.904 common/idpf: not in enabled drivers build config 00:35:32.904 common/mvep: not in enabled drivers build config 00:35:32.904 common/octeontx: not in enabled drivers build config 00:35:32.904 bus/auxiliary: not in enabled drivers build config 00:35:32.904 bus/cdx: not in enabled drivers build config 00:35:32.904 bus/dpaa: not in enabled drivers build config 00:35:32.904 bus/fslmc: not in enabled drivers build config 00:35:32.904 bus/ifpga: not in enabled drivers build config 00:35:32.904 bus/platform: not in enabled drivers build config 00:35:32.904 
bus/vmbus: not in enabled drivers build config 00:35:32.904 common/cnxk: not in enabled drivers build config 00:35:32.904 common/mlx5: not in enabled drivers build config 00:35:32.904 common/nfp: not in enabled drivers build config 00:35:32.904 common/qat: not in enabled drivers build config 00:35:32.904 common/sfc_efx: not in enabled drivers build config 00:35:32.904 mempool/bucket: not in enabled drivers build config 00:35:32.904 mempool/cnxk: not in enabled drivers build config 00:35:32.904 mempool/dpaa: not in enabled drivers build config 00:35:32.904 mempool/dpaa2: not in enabled drivers build config 00:35:32.904 mempool/octeontx: not in enabled drivers build config 00:35:32.904 mempool/stack: not in enabled drivers build config 00:35:32.904 dma/cnxk: not in enabled drivers build config 00:35:32.904 dma/dpaa: not in enabled drivers build config 00:35:32.904 dma/dpaa2: not in enabled drivers build config 00:35:32.904 dma/hisilicon: not in enabled drivers build config 00:35:32.904 dma/idxd: not in enabled drivers build config 00:35:32.904 dma/ioat: not in enabled drivers build config 00:35:32.904 dma/skeleton: not in enabled drivers build config 00:35:32.904 net/af_packet: not in enabled drivers build config 00:35:32.904 net/af_xdp: not in enabled drivers build config 00:35:32.904 net/ark: not in enabled drivers build config 00:35:32.904 net/atlantic: not in enabled drivers build config 00:35:32.904 net/avp: not in enabled drivers build config 00:35:32.904 net/axgbe: not in enabled drivers build config 00:35:32.904 net/bnx2x: not in enabled drivers build config 00:35:32.904 net/bnxt: not in enabled drivers build config 00:35:32.904 net/bonding: not in enabled drivers build config 00:35:32.904 net/cnxk: not in enabled drivers build config 00:35:32.904 net/cpfl: not in enabled drivers build config 00:35:32.904 net/cxgbe: not in enabled drivers build config 00:35:32.904 net/dpaa: not in enabled drivers build config 00:35:32.904 net/dpaa2: not in enabled drivers build config 00:35:32.904 net/e1000: not in enabled drivers build config 00:35:32.904 net/ena: not in enabled drivers build config 00:35:32.905 net/enetc: not in enabled drivers build config 00:35:32.905 net/enetfec: not in enabled drivers build config 00:35:32.905 net/enic: not in enabled drivers build config 00:35:32.905 net/failsafe: not in enabled drivers build config 00:35:32.905 net/fm10k: not in enabled drivers build config 00:35:32.905 net/gve: not in enabled drivers build config 00:35:32.905 net/hinic: not in enabled drivers build config 00:35:32.905 net/hns3: not in enabled drivers build config 00:35:32.905 net/i40e: not in enabled drivers build config 00:35:32.905 net/iavf: not in enabled drivers build config 00:35:32.905 net/ice: not in enabled drivers build config 00:35:32.905 net/idpf: not in enabled drivers build config 00:35:32.905 net/igc: not in enabled drivers build config 00:35:32.905 net/ionic: not in enabled drivers build config 00:35:32.905 net/ipn3ke: not in enabled drivers build config 00:35:32.905 net/ixgbe: not in enabled drivers build config 00:35:32.905 net/mana: not in enabled drivers build config 00:35:32.905 net/memif: not in enabled drivers build config 00:35:32.905 net/mlx4: not in enabled drivers build config 00:35:32.905 net/mlx5: not in enabled drivers build config 00:35:32.905 net/mvneta: not in enabled drivers build config 00:35:32.905 net/mvpp2: not in enabled drivers build config 00:35:32.905 net/netvsc: not in enabled drivers build config 00:35:32.905 net/nfb: not in enabled drivers build 
config 00:35:32.905 net/nfp: not in enabled drivers build config 00:35:32.905 net/ngbe: not in enabled drivers build config 00:35:32.905 net/null: not in enabled drivers build config 00:35:32.905 net/octeontx: not in enabled drivers build config 00:35:32.905 net/octeon_ep: not in enabled drivers build config 00:35:32.905 net/pcap: not in enabled drivers build config 00:35:32.905 net/pfe: not in enabled drivers build config 00:35:32.905 net/qede: not in enabled drivers build config 00:35:32.905 net/ring: not in enabled drivers build config 00:35:32.905 net/sfc: not in enabled drivers build config 00:35:32.905 net/softnic: not in enabled drivers build config 00:35:32.905 net/tap: not in enabled drivers build config 00:35:32.905 net/thunderx: not in enabled drivers build config 00:35:32.905 net/txgbe: not in enabled drivers build config 00:35:32.905 net/vdev_netvsc: not in enabled drivers build config 00:35:32.905 net/vhost: not in enabled drivers build config 00:35:32.905 net/virtio: not in enabled drivers build config 00:35:32.905 net/vmxnet3: not in enabled drivers build config 00:35:32.905 raw/*: missing internal dependency, "rawdev" 00:35:32.905 crypto/armv8: not in enabled drivers build config 00:35:32.905 crypto/bcmfs: not in enabled drivers build config 00:35:32.905 crypto/caam_jr: not in enabled drivers build config 00:35:32.905 crypto/ccp: not in enabled drivers build config 00:35:32.905 crypto/cnxk: not in enabled drivers build config 00:35:32.905 crypto/dpaa_sec: not in enabled drivers build config 00:35:32.905 crypto/dpaa2_sec: not in enabled drivers build config 00:35:32.905 crypto/ipsec_mb: not in enabled drivers build config 00:35:32.905 crypto/mlx5: not in enabled drivers build config 00:35:32.905 crypto/mvsam: not in enabled drivers build config 00:35:32.905 crypto/nitrox: not in enabled drivers build config 00:35:32.905 crypto/null: not in enabled drivers build config 00:35:32.905 crypto/octeontx: not in enabled drivers build config 00:35:32.905 crypto/openssl: not in enabled drivers build config 00:35:32.905 crypto/scheduler: not in enabled drivers build config 00:35:32.905 crypto/uadk: not in enabled drivers build config 00:35:32.905 crypto/virtio: not in enabled drivers build config 00:35:32.905 compress/isal: not in enabled drivers build config 00:35:32.905 compress/mlx5: not in enabled drivers build config 00:35:32.905 compress/octeontx: not in enabled drivers build config 00:35:32.905 compress/zlib: not in enabled drivers build config 00:35:32.905 regex/*: missing internal dependency, "regexdev" 00:35:32.905 ml/*: missing internal dependency, "mldev" 00:35:32.905 vdpa/ifc: not in enabled drivers build config 00:35:32.905 vdpa/mlx5: not in enabled drivers build config 00:35:32.905 vdpa/nfp: not in enabled drivers build config 00:35:32.905 vdpa/sfc: not in enabled drivers build config 00:35:32.905 event/*: missing internal dependency, "eventdev" 00:35:32.905 baseband/*: missing internal dependency, "bbdev" 00:35:32.905 gpu/*: missing internal dependency, "gpudev" 00:35:32.905 00:35:32.905 00:35:33.165 Build targets in project: 85 00:35:33.165 00:35:33.165 DPDK 23.11.0 00:35:33.165 00:35:33.165 User defined options 00:35:33.165 default_library : static 00:35:33.165 libdir : lib 00:35:33.165 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:33.165 b_lto : true 00:35:33.165 b_sanitize : address 00:35:33.165 c_args : -fPIC -Werror 00:35:33.165 c_link_args : 00:35:33.165 cpu_instruction_set: native 00:35:33.165 disable_apps : 
graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:35:33.165 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:35:33.165 enable_docs : false 00:35:33.165 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:35:33.165 enable_kmods : false 00:35:33.165 tests : false 00:35:33.165 00:35:33.165 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:35:33.734 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:35:33.734 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:35:33.734 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:35:33.734 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:35:33.734 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:35:33.734 [5/264] Linking static target lib/librte_kvargs.a 00:35:33.734 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:35:33.734 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:35:33.993 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:35:33.993 [9/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:35:33.993 [10/264] Linking static target lib/librte_log.a 00:35:33.993 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:35:33.993 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:35:33.993 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:35:33.993 [14/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:35:34.251 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:35:34.251 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:35:34.251 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:35:34.251 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:35:34.510 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:35:34.510 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:35:34.510 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:35:34.510 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:35:34.510 [23/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:35:34.510 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:35:34.769 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:35:34.769 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:35:34.769 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:35:34.769 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:35:34.769 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:35:34.769 [30/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:35:34.769 [31/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:35:34.769 [32/264] Linking static target lib/librte_telemetry.a 00:35:35.028 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:35:35.028 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:35:35.028 [35/264] Linking target lib/librte_log.so.24.0 00:35:35.028 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:35:35.028 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:35:35.028 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:35:35.028 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:35:35.028 [40/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:35:35.286 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:35:35.286 [42/264] Linking target lib/librte_kvargs.so.24.0 00:35:35.286 [43/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:35:35.286 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:35:35.286 [45/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:35:35.545 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:35:35.545 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:35:35.545 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:35:35.545 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:35:35.545 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:35:35.804 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:35:35.804 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:35:35.804 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:35:35.804 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:35:35.804 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:35:35.804 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:35:35.804 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:35:35.804 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:35:35.804 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:35:35.804 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:35:36.063 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:35:36.063 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:35:36.063 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:35:36.063 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:35:36.063 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:35:36.063 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:35:36.323 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:35:36.323 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:35:36.323 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:35:36.323 [70/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:35:36.323 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:35:36.323 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:35:36.323 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:35:36.323 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:35:36.323 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:35:36.582 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:35:36.582 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:35:36.583 [78/264] Linking target lib/librte_telemetry.so.24.0 00:35:36.583 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:35:36.583 [80/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:35:36.841 [81/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:35:36.841 [82/264] Linking static target lib/librte_ring.a 00:35:36.841 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:35:36.841 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:35:36.841 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:35:36.841 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:35:37.100 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:35:37.100 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:35:37.100 [89/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:35:37.100 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:35:37.100 [91/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:35:37.358 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:35:37.358 [93/264] Linking static target lib/librte_eal.a 00:35:37.358 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:35:37.358 [95/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:35:37.358 [96/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:35:37.358 [97/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:35:37.358 [98/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:35:37.358 [99/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:35:37.358 [100/264] Linking static target lib/librte_mempool.a 00:35:37.617 [101/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:35:37.617 [102/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:35:37.617 [103/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:35:37.617 [104/264] Linking static target lib/librte_rcu.a 00:35:37.617 [105/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:35:37.617 [106/264] Linking static target lib/librte_net.a 00:35:37.875 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:35:37.875 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:35:37.875 [109/264] Linking static target lib/librte_meter.a 00:35:37.875 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:35:37.875 [111/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:35:37.875 [112/264] Generating lib/rcu.sym_chk with a custom command (wrapped by 
meson to capture output) 00:35:38.133 [113/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:35:38.133 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:35:38.133 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:35:38.133 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:35:38.392 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:35:38.649 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:35:38.907 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:35:38.907 [120/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:35:38.907 [121/264] Linking static target lib/librte_mbuf.a 00:35:38.907 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:35:38.907 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:35:39.190 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:35:39.190 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:35:39.190 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:35:39.190 [127/264] Linking static target lib/librte_pci.a 00:35:39.190 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:35:39.190 [129/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:35:39.190 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:35:39.190 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:35:39.446 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:35:39.446 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:35:39.446 [134/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:35:39.446 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:35:39.446 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:35:39.446 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:35:39.446 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:35:39.446 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:35:39.446 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:35:39.704 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:35:39.704 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:35:39.704 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:35:39.962 [144/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:35:39.962 [145/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:35:39.962 [146/264] Linking static target lib/librte_cmdline.a 00:35:39.962 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:35:40.219 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:35:40.219 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:35:40.219 [150/264] Linking static target lib/librte_timer.a 00:35:40.219 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:35:40.219 [152/264] 
Linking static target lib/librte_compressdev.a 00:35:40.219 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:35:40.476 [154/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:35:40.476 [155/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:35:40.476 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:35:40.734 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:35:40.734 [158/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:35:40.734 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:35:40.734 [160/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:40.734 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:35:40.992 [162/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:35:40.992 [163/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:35:41.251 [164/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:35:41.251 [165/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:35:41.251 [166/264] Linking static target lib/librte_dmadev.a 00:35:41.251 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:35:41.508 [168/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:35:41.508 [169/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:35:41.508 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:35:41.508 [171/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:35:41.764 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:35:41.764 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:41.764 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:35:42.021 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:35:42.021 [176/264] Linking static target lib/librte_power.a 00:35:42.021 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:35:42.021 [178/264] Linking static target lib/librte_reorder.a 00:35:42.279 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:35:42.279 [180/264] Linking static target lib/librte_security.a 00:35:42.279 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:35:42.279 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:35:42.537 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:35:42.537 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:35:42.537 [185/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:35:42.795 [186/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:35:43.053 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:35:43.053 [188/264] Linking static target lib/librte_cryptodev.a 00:35:43.309 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:35:43.310 [190/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:35:43.310 [191/264] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:35:43.567 [192/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:35:43.567 [193/264] Linking static target lib/librte_ethdev.a 00:35:43.567 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:35:43.823 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:35:44.079 [196/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:35:44.079 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:35:44.079 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:35:44.079 [199/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:35:44.338 [200/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:44.338 [201/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:35:44.338 [202/264] Linking static target lib/librte_hash.a 00:35:44.596 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:35:44.596 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:35:44.596 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:35:44.596 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:35:44.853 [207/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:35:44.853 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:35:44.853 [209/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:35:44.853 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:35:44.853 [211/264] Linking static target drivers/librte_bus_vdev.a 00:35:44.853 [212/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:35:44.853 [213/264] Linking static target drivers/librte_bus_pci.a 00:35:44.853 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:35:44.853 [215/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:35:45.110 [216/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:35:45.110 [217/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:45.110 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:35:45.110 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:35:45.368 [220/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:35:45.368 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:35:45.368 [222/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:35:45.368 [223/264] Linking static target drivers/librte_mempool_ring.a 00:35:45.625 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:35:52.221 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:35:55.574 [226/264] Linking target lib/librte_eal.so.24.0 00:35:55.574 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:35:55.574 [228/264] Linking target lib/librte_pci.so.24.0 00:35:55.574 [229/264] Linking target 
lib/librte_meter.so.24.0 00:35:55.574 [230/264] Linking target lib/librte_ring.so.24.0 00:35:55.574 [231/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:35:55.574 [232/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:35:55.574 [233/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:35:55.574 [234/264] Linking target drivers/librte_bus_vdev.so.24.0 00:35:55.574 [235/264] Linking target lib/librte_timer.so.24.0 00:35:55.832 [236/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:35:55.832 [237/264] Linking target lib/librte_dmadev.so.24.0 00:35:56.090 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:35:56.349 [239/264] Linking target lib/librte_mempool.so.24.0 00:35:56.349 [240/264] Linking target lib/librte_rcu.so.24.0 00:35:56.608 [241/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:35:56.608 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:35:56.873 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:35:57.133 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:35:57.700 [245/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:35:58.266 [246/264] Linking target lib/librte_mbuf.so.24.0 00:35:58.524 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:35:58.782 [248/264] Linking target lib/librte_reorder.so.24.0 00:35:59.041 [249/264] Linking target lib/librte_compressdev.so.24.0 00:35:59.299 [250/264] Linking target lib/librte_net.so.24.0 00:35:59.557 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:36:00.948 [252/264] Linking target lib/librte_cmdline.so.24.0 00:36:00.948 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:36:00.948 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:36:01.207 [255/264] Linking target lib/librte_security.so.24.0 00:36:04.497 [256/264] Linking target lib/librte_hash.so.24.0 00:36:04.497 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:36:11.063 [258/264] Linking target lib/librte_ethdev.so.24.0 00:36:11.063 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:36:12.967 [260/264] Linking target lib/librte_power.so.24.0 00:36:22.942 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:36:22.942 [262/264] Linking static target lib/librte_vhost.a 00:36:24.842 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:37:21.103 [264/264] Linking target lib/librte_vhost.so.24.0 00:37:21.103 INFO: autodetecting backend as ninja 00:37:21.103 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:37:21.103 CC lib/ut_mock/mock.o 00:37:21.103 CC lib/ut/ut.o 00:37:21.103 CC lib/log/log.o 00:37:21.103 CC lib/log/log_flags.o 00:37:21.103 CC lib/log/log_deprecated.o 00:37:21.103 LIB libspdk_ut_mock.a 00:37:21.103 LIB libspdk_log.a 00:37:21.103 LIB libspdk_ut.a 00:37:21.103 CXX lib/trace_parser/trace.o 00:37:21.103 CC lib/util/base64.o 00:37:21.103 CC lib/util/cpuset.o 00:37:21.103 CC lib/dma/dma.o 00:37:21.103 CC lib/util/bit_array.o 00:37:21.103 CC lib/util/crc16.o 00:37:21.103 CC lib/util/crc32.o 00:37:21.103 CC lib/util/crc32c.o 
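The DPDK submodule build above and the SPDK objects that follow come from the autopackage release rebuild: the configure line earlier in this section (with --enable-debug stripped out and --enable-lto appended) followed by make. Reproduced by hand, assuming the checkout path shown in the log:

```shell
# Re-run the release configure + build from this log. The flag set is
# copied verbatim from the configure invocation above; -j10 matches the
# MAKEFLAGS value set by autopackage.sh.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
cd "$SPDK_DIR"
./configure --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --enable-ubsan --enable-asan --enable-coverage \
    --with-raid5f --enable-lto
make -j10
```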
00:37:21.103 CC lib/ioat/ioat.o 00:37:21.103 CC lib/vfio_user/host/vfio_user_pci.o 00:37:21.103 CC lib/util/crc64.o 00:37:21.103 CC lib/util/crc32_ieee.o 00:37:21.103 LIB libspdk_dma.a 00:37:21.103 CC lib/util/dif.o 00:37:21.103 CC lib/vfio_user/host/vfio_user.o 00:37:21.103 CC lib/util/fd.o 00:37:21.103 CC lib/util/file.o 00:37:21.103 CC lib/util/hexlify.o 00:37:21.103 CC lib/util/math.o 00:37:21.103 CC lib/util/iov.o 00:37:21.103 CC lib/util/pipe.o 00:37:21.103 LIB libspdk_ioat.a 00:37:21.103 LIB libspdk_vfio_user.a 00:37:21.103 CC lib/util/strerror_tls.o 00:37:21.103 CC lib/util/string.o 00:37:21.103 CC lib/util/uuid.o 00:37:21.103 CC lib/util/fd_group.o 00:37:21.103 CC lib/util/xor.o 00:37:21.103 CC lib/util/zipf.o 00:37:21.103 LIB libspdk_util.a 00:37:21.103 LIB libspdk_trace_parser.a 00:37:21.103 CC lib/env_dpdk/pci.o 00:37:21.104 CC lib/env_dpdk/memory.o 00:37:21.104 CC lib/env_dpdk/threads.o 00:37:21.104 CC lib/env_dpdk/init.o 00:37:21.104 CC lib/env_dpdk/env.o 00:37:21.104 CC lib/json/json_parse.o 00:37:21.104 CC lib/idxd/idxd.o 00:37:21.104 CC lib/rdma/common.o 00:37:21.104 CC lib/vmd/vmd.o 00:37:21.104 CC lib/conf/conf.o 00:37:21.104 CC lib/vmd/led.o 00:37:21.104 CC lib/json/json_util.o 00:37:21.104 LIB libspdk_conf.a 00:37:21.104 CC lib/json/json_write.o 00:37:21.104 CC lib/idxd/idxd_user.o 00:37:21.104 CC lib/env_dpdk/pci_ioat.o 00:37:21.104 CC lib/env_dpdk/pci_virtio.o 00:37:21.104 CC lib/rdma/rdma_verbs.o 00:37:21.104 CC lib/env_dpdk/pci_vmd.o 00:37:21.104 CC lib/env_dpdk/pci_idxd.o 00:37:21.104 CC lib/env_dpdk/pci_event.o 00:37:21.104 CC lib/env_dpdk/sigbus_handler.o 00:37:21.104 LIB libspdk_vmd.a 00:37:21.104 LIB libspdk_idxd.a 00:37:21.104 CC lib/env_dpdk/pci_dpdk.o 00:37:21.104 LIB libspdk_json.a 00:37:21.104 CC lib/env_dpdk/pci_dpdk_2207.o 00:37:21.104 CC lib/env_dpdk/pci_dpdk_2211.o 00:37:21.104 LIB libspdk_rdma.a 00:37:21.104 CC lib/jsonrpc/jsonrpc_server.o 00:37:21.104 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:37:21.104 CC lib/jsonrpc/jsonrpc_client.o 00:37:21.104 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:37:21.104 LIB libspdk_jsonrpc.a 00:37:21.104 LIB libspdk_env_dpdk.a 00:37:21.104 CC lib/rpc/rpc.o 00:37:21.104 LIB libspdk_rpc.a 00:37:21.104 CC lib/sock/sock_rpc.o 00:37:21.104 CC lib/sock/sock.o 00:37:21.104 CC lib/trace/trace.o 00:37:21.104 CC lib/trace/trace_rpc.o 00:37:21.104 CC lib/trace/trace_flags.o 00:37:21.104 CC lib/notify/notify.o 00:37:21.104 CC lib/notify/notify_rpc.o 00:37:21.104 LIB libspdk_notify.a 00:37:21.104 LIB libspdk_trace.a 00:37:21.104 LIB libspdk_sock.a 00:37:21.104 CC lib/thread/iobuf.o 00:37:21.104 CC lib/thread/thread.o 00:37:21.104 CC lib/nvme/nvme_ns_cmd.o 00:37:21.104 CC lib/nvme/nvme_pcie_common.o 00:37:21.104 CC lib/nvme/nvme_ctrlr_cmd.o 00:37:21.104 CC lib/nvme/nvme_fabric.o 00:37:21.104 CC lib/nvme/nvme_ns.o 00:37:21.104 CC lib/nvme/nvme_ctrlr.o 00:37:21.104 CC lib/nvme/nvme_pcie.o 00:37:21.104 CC lib/nvme/nvme_qpair.o 00:37:21.104 CC lib/nvme/nvme.o 00:37:21.104 CC lib/nvme/nvme_quirks.o 00:37:21.104 CC lib/nvme/nvme_transport.o 00:37:21.104 CC lib/nvme/nvme_discovery.o 00:37:21.104 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:37:21.104 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:37:21.104 CC lib/nvme/nvme_tcp.o 00:37:21.104 CC lib/nvme/nvme_opal.o 00:37:21.104 LIB libspdk_thread.a 00:37:21.104 CC lib/nvme/nvme_io_msg.o 00:37:21.104 CC lib/nvme/nvme_poll_group.o 00:37:21.104 CC lib/nvme/nvme_zns.o 00:37:21.104 CC lib/nvme/nvme_cuse.o 00:37:21.104 CC lib/nvme/nvme_vfio_user.o 00:37:21.104 CC lib/nvme/nvme_rdma.o 00:37:21.104 CC lib/accel/accel.o 
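Because the build is configured with --enable-coverage, every CC line here produces gcov-instrumented objects; the lcov invocations at the top of this section (00:34:43 through 00:35:00) then merge the baseline and post-test captures and strip third-party and system sources. The same commands and filter patterns, condensed into a loop ($OUT stands in for the output directory as resolved in the log):

```shell
OUT=/home/vagrant/spdk_repo/spdk/../output   # output dir as resolved in the log
# Merge the baseline and post-test captures into one tracefile.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q \
    -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Remove the same path patterns the log filters out one at a time.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov --rc lcov_branch_coverage=1 -q \
        -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
```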
00:37:21.104 CC lib/blob/blobstore.o 00:37:21.104 CC lib/blob/request.o 00:37:21.104 CC lib/blob/zeroes.o 00:37:21.104 CC lib/virtio/virtio.o 00:37:21.104 CC lib/init/json_config.o 00:37:21.104 CC lib/init/subsystem.o 00:37:21.104 CC lib/init/subsystem_rpc.o 00:37:21.104 CC lib/accel/accel_rpc.o 00:37:21.104 CC lib/accel/accel_sw.o 00:37:21.104 CC lib/init/rpc.o 00:37:21.104 CC lib/virtio/virtio_vhost_user.o 00:37:21.104 CC lib/virtio/virtio_vfio_user.o 00:37:21.104 CC lib/virtio/virtio_pci.o 00:37:21.104 LIB libspdk_init.a 00:37:21.104 CC lib/blob/blob_bs_dev.o 00:37:21.104 CC lib/event/app.o 00:37:21.104 CC lib/event/reactor.o 00:37:21.104 CC lib/event/app_rpc.o 00:37:21.104 CC lib/event/log_rpc.o 00:37:21.104 CC lib/event/scheduler_static.o 00:37:21.104 LIB libspdk_virtio.a 00:37:21.363 LIB libspdk_accel.a 00:37:21.363 LIB libspdk_nvme.a 00:37:21.363 LIB libspdk_event.a 00:37:21.363 CC lib/bdev/bdev.o 00:37:21.363 CC lib/bdev/bdev_rpc.o 00:37:21.363 CC lib/bdev/bdev_zone.o 00:37:21.363 CC lib/bdev/part.o 00:37:21.363 CC lib/bdev/scsi_nvme.o 00:37:22.298 LIB libspdk_blob.a 00:37:22.298 CC lib/lvol/lvol.o 00:37:22.298 CC lib/blobfs/blobfs.o 00:37:22.298 CC lib/blobfs/tree.o 00:37:22.556 LIB libspdk_blobfs.a 00:37:22.815 LIB libspdk_bdev.a 00:37:22.815 LIB libspdk_lvol.a 00:37:22.815 CC lib/nbd/nbd_rpc.o 00:37:22.815 CC lib/scsi/dev.o 00:37:22.815 CC lib/nbd/nbd.o 00:37:22.815 CC lib/scsi/lun.o 00:37:22.815 CC lib/nvmf/ctrlr_discovery.o 00:37:22.815 CC lib/nvmf/ctrlr.o 00:37:22.815 CC lib/scsi/port.o 00:37:22.815 CC lib/nvmf/ctrlr_bdev.o 00:37:22.815 CC lib/nvmf/subsystem.o 00:37:22.815 CC lib/ftl/ftl_core.o 00:37:23.075 CC lib/scsi/scsi.o 00:37:23.075 CC lib/nvmf/nvmf.o 00:37:23.075 CC lib/scsi/scsi_bdev.o 00:37:23.075 CC lib/nvmf/nvmf_rpc.o 00:37:23.075 CC lib/scsi/scsi_pr.o 00:37:23.075 CC lib/ftl/ftl_init.o 00:37:23.075 LIB libspdk_nbd.a 00:37:23.362 CC lib/nvmf/transport.o 00:37:23.362 CC lib/scsi/scsi_rpc.o 00:37:23.362 CC lib/scsi/task.o 00:37:23.362 CC lib/nvmf/tcp.o 00:37:23.362 CC lib/ftl/ftl_layout.o 00:37:23.362 CC lib/ftl/ftl_debug.o 00:37:23.362 CC lib/ftl/ftl_io.o 00:37:23.362 CC lib/nvmf/rdma.o 00:37:23.362 LIB libspdk_scsi.a 00:37:23.362 CC lib/ftl/ftl_sb.o 00:37:23.660 CC lib/ftl/ftl_l2p.o 00:37:23.660 CC lib/ftl/ftl_l2p_flat.o 00:37:23.660 CC lib/ftl/ftl_nv_cache.o 00:37:23.660 CC lib/ftl/ftl_band.o 00:37:23.660 CC lib/ftl/ftl_band_ops.o 00:37:23.660 CC lib/ftl/ftl_writer.o 00:37:23.660 CC lib/ftl/ftl_rq.o 00:37:23.660 CC lib/ftl/ftl_reloc.o 00:37:23.660 CC lib/ftl/ftl_l2p_cache.o 00:37:23.660 CC lib/iscsi/conn.o 00:37:23.660 CC lib/ftl/ftl_p2l.o 00:37:23.660 CC lib/ftl/mngt/ftl_mngt.o 00:37:23.919 CC lib/iscsi/init_grp.o 00:37:23.919 CC lib/iscsi/iscsi.o 00:37:23.919 CC lib/iscsi/md5.o 00:37:23.919 CC lib/iscsi/param.o 00:37:23.919 CC lib/iscsi/portal_grp.o 00:37:23.919 CC lib/iscsi/tgt_node.o 00:37:24.178 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:37:24.178 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:37:24.178 CC lib/ftl/mngt/ftl_mngt_startup.o 00:37:24.178 CC lib/ftl/mngt/ftl_mngt_md.o 00:37:24.178 CC lib/vhost/vhost.o 00:37:24.178 CC lib/vhost/vhost_rpc.o 00:37:24.178 CC lib/vhost/vhost_scsi.o 00:37:24.178 CC lib/iscsi/iscsi_subsystem.o 00:37:24.178 CC lib/iscsi/iscsi_rpc.o 00:37:24.178 CC lib/iscsi/task.o 00:37:24.178 CC lib/ftl/mngt/ftl_mngt_misc.o 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:37:24.438 LIB libspdk_nvmf.a 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_band.o 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:37:24.438 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:37:24.438 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:37:24.438 CC lib/vhost/vhost_blk.o 00:37:24.438 CC lib/vhost/rte_vhost_user.o 00:37:24.697 CC lib/ftl/utils/ftl_conf.o 00:37:24.697 CC lib/ftl/utils/ftl_md.o 00:37:24.697 CC lib/ftl/utils/ftl_mempool.o 00:37:24.697 CC lib/ftl/utils/ftl_bitmap.o 00:37:24.697 LIB libspdk_iscsi.a 00:37:24.697 CC lib/ftl/utils/ftl_property.o 00:37:24.697 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:37:24.697 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:37:24.697 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:37:24.697 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:37:24.697 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:37:24.697 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:37:24.957 CC lib/ftl/upgrade/ftl_sb_v3.o 00:37:24.957 CC lib/ftl/upgrade/ftl_sb_v5.o 00:37:24.957 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:37:24.957 CC lib/ftl/nvc/ftl_nvc_dev.o 00:37:24.957 CC lib/ftl/base/ftl_base_dev.o 00:37:24.957 CC lib/ftl/base/ftl_base_bdev.o 00:37:25.216 LIB libspdk_ftl.a 00:37:25.216 LIB libspdk_vhost.a 00:37:25.476 CC module/env_dpdk/env_dpdk_rpc.o 00:37:25.476 CC module/accel/ioat/accel_ioat.o 00:37:25.476 CC module/sock/posix/posix.o 00:37:25.476 CC module/accel/iaa/accel_iaa.o 00:37:25.476 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:37:25.476 CC module/scheduler/gscheduler/gscheduler.o 00:37:25.476 CC module/accel/dsa/accel_dsa.o 00:37:25.476 CC module/scheduler/dynamic/scheduler_dynamic.o 00:37:25.476 CC module/blob/bdev/blob_bdev.o 00:37:25.476 CC module/accel/error/accel_error.o 00:37:25.476 LIB libspdk_env_dpdk_rpc.a 00:37:25.476 CC module/accel/dsa/accel_dsa_rpc.o 00:37:25.476 LIB libspdk_scheduler_dpdk_governor.a 00:37:25.736 CC module/accel/ioat/accel_ioat_rpc.o 00:37:25.736 LIB libspdk_scheduler_gscheduler.a 00:37:25.736 CC module/accel/iaa/accel_iaa_rpc.o 00:37:25.736 CC module/accel/error/accel_error_rpc.o 00:37:25.736 LIB libspdk_scheduler_dynamic.a 00:37:25.736 LIB libspdk_accel_dsa.a 00:37:25.736 LIB libspdk_blob_bdev.a 00:37:25.736 LIB libspdk_accel_ioat.a 00:37:25.736 LIB libspdk_accel_error.a 00:37:25.736 LIB libspdk_accel_iaa.a 00:37:25.736 CC module/bdev/gpt/gpt.o 00:37:25.736 CC module/blobfs/bdev/blobfs_bdev.o 00:37:25.736 CC module/bdev/malloc/bdev_malloc.o 00:37:25.736 CC module/bdev/error/vbdev_error.o 00:37:25.736 CC module/bdev/delay/vbdev_delay.o 00:37:25.736 CC module/bdev/lvol/vbdev_lvol.o 00:37:25.736 CC module/bdev/null/bdev_null.o 00:37:25.736 CC module/bdev/nvme/bdev_nvme.o 00:37:25.996 LIB libspdk_sock_posix.a 00:37:25.996 CC module/bdev/passthru/vbdev_passthru.o 00:37:25.996 CC module/bdev/delay/vbdev_delay_rpc.o 00:37:25.996 CC module/bdev/gpt/vbdev_gpt.o 00:37:25.996 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:37:25.996 CC module/bdev/null/bdev_null_rpc.o 00:37:25.996 CC module/bdev/error/vbdev_error_rpc.o 00:37:25.996 CC module/bdev/malloc/bdev_malloc_rpc.o 00:37:25.996 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:37:25.996 LIB libspdk_bdev_delay.a 00:37:25.996 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:37:26.261 LIB libspdk_bdev_gpt.a 00:37:26.261 LIB libspdk_blobfs_bdev.a 00:37:26.261 LIB libspdk_bdev_null.a 00:37:26.261 CC module/bdev/nvme/bdev_nvme_rpc.o 00:37:26.261 LIB libspdk_bdev_error.a 00:37:26.261 LIB libspdk_bdev_malloc.a 00:37:26.261 CC module/bdev/raid/bdev_raid.o 00:37:26.261 CC module/bdev/raid/bdev_raid_rpc.o 00:37:26.261 LIB libspdk_bdev_passthru.a 00:37:26.261 CC module/bdev/split/vbdev_split.o 00:37:26.261 CC module/bdev/zone_block/vbdev_zone_block.o 
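Per the Meson options summary above (b_sanitize : address, b_lto : true) and the --enable-asan --enable-ubsan configure flags, these bdev modules, like everything else in this run, carry sanitizer instrumentation. When re-running any of the test binaries built below outside CI, the standard ASan/UBSan runtime variables control failure behavior; the option values and the binary path in this sketch are illustrative assumptions, not taken from the log:

```shell
# Common sanitizer runtime knobs (ASAN_OPTIONS/UBSAN_OPTIONS are the
# standard variable names; these particular values are assumptions).
export ASAN_OPTIONS=detect_leaks=1:halt_on_error=1
export UBSAN_OPTIONS=print_stacktrace=1:halt_on_error=1
# Hypothetical invocation of one of the fuzz targets compiled later in
# this log; adjust the path to your build tree.
./test/app/fuzz/nvme_fuzz/nvme_fuzz
```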
00:37:26.261 CC module/bdev/aio/bdev_aio.o 00:37:26.261 LIB libspdk_bdev_lvol.a 00:37:26.261 CC module/bdev/ftl/bdev_ftl.o 00:37:26.261 CC module/bdev/ftl/bdev_ftl_rpc.o 00:37:26.261 CC module/bdev/iscsi/bdev_iscsi.o 00:37:26.261 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:37:26.529 CC module/bdev/split/vbdev_split_rpc.o 00:37:26.529 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:37:26.529 CC module/bdev/raid/bdev_raid_sb.o 00:37:26.529 LIB libspdk_bdev_ftl.a 00:37:26.529 CC module/bdev/aio/bdev_aio_rpc.o 00:37:26.529 CC module/bdev/nvme/nvme_rpc.o 00:37:26.529 CC module/bdev/nvme/bdev_mdns_client.o 00:37:26.529 LIB libspdk_bdev_split.a 00:37:26.529 LIB libspdk_bdev_iscsi.a 00:37:26.529 LIB libspdk_bdev_aio.a 00:37:26.529 CC module/bdev/virtio/bdev_virtio_scsi.o 00:37:26.529 CC module/bdev/nvme/vbdev_opal_rpc.o 00:37:26.529 CC module/bdev/nvme/vbdev_opal.o 00:37:26.789 LIB libspdk_bdev_zone_block.a 00:37:26.789 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:37:26.789 CC module/bdev/raid/raid0.o 00:37:26.789 CC module/bdev/virtio/bdev_virtio_blk.o 00:37:26.789 CC module/bdev/raid/raid1.o 00:37:26.789 CC module/bdev/raid/concat.o 00:37:26.789 CC module/bdev/virtio/bdev_virtio_rpc.o 00:37:26.789 CC module/bdev/raid/raid5f.o 00:37:27.046 LIB libspdk_bdev_nvme.a 00:37:27.046 LIB libspdk_bdev_virtio.a 00:37:27.046 LIB libspdk_bdev_raid.a 00:37:27.611 CC module/event/subsystems/iobuf/iobuf.o 00:37:27.611 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:37:27.611 CC module/event/subsystems/vmd/vmd_rpc.o 00:37:27.611 CC module/event/subsystems/vmd/vmd.o 00:37:27.611 CC module/event/subsystems/sock/sock.o 00:37:27.611 CC module/event/subsystems/scheduler/scheduler.o 00:37:27.611 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:37:27.611 LIB libspdk_event_sock.a 00:37:27.612 LIB libspdk_event_vmd.a 00:37:27.612 LIB libspdk_event_scheduler.a 00:37:27.612 LIB libspdk_event_iobuf.a 00:37:27.612 LIB libspdk_event_vhost_blk.a 00:37:27.870 CC module/event/subsystems/accel/accel.o 00:37:27.870 LIB libspdk_event_accel.a 00:37:28.128 CC module/event/subsystems/bdev/bdev.o 00:37:28.386 LIB libspdk_event_bdev.a 00:37:28.386 CC module/event/subsystems/nbd/nbd.o 00:37:28.386 CC module/event/subsystems/scsi/scsi.o 00:37:28.386 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:37:28.386 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:37:28.644 LIB libspdk_event_nbd.a 00:37:28.644 LIB libspdk_event_scsi.a 00:37:28.644 LIB libspdk_event_nvmf.a 00:37:28.644 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:37:28.644 CC module/event/subsystems/iscsi/iscsi.o 00:37:28.904 LIB libspdk_event_vhost_scsi.a 00:37:28.904 LIB libspdk_event_iscsi.a 00:37:29.166 CXX app/trace/trace.o 00:37:29.166 CC examples/ioat/perf/perf.o 00:37:29.166 CC examples/sock/hello_world/hello_sock.o 00:37:29.166 CC examples/accel/perf/accel_perf.o 00:37:29.166 CC test/accel/dif/dif.o 00:37:29.166 CC examples/nvme/hello_world/hello_world.o 00:37:29.166 CC examples/bdev/hello_world/hello_bdev.o 00:37:29.166 CC examples/blob/hello_world/hello_blob.o 00:37:29.166 CC test/bdev/bdevio/bdevio.o 00:37:29.166 CC test/app/bdev_svc/bdev_svc.o 00:37:29.166 LINK ioat_perf 00:37:29.426 LINK hello_sock 00:37:29.426 LINK hello_world 00:37:29.426 LINK hello_bdev 00:37:29.426 LINK hello_blob 00:37:29.426 LINK bdev_svc 00:37:29.426 LINK dif 00:37:29.426 LINK accel_perf 00:37:29.426 LINK bdevio 00:37:29.426 LINK spdk_trace 00:37:39.394 CC app/trace_record/trace_record.o 00:37:39.394 LINK spdk_trace_record 00:37:54.272 CC examples/ioat/verify/verify.o 00:37:54.838 LINK 
verify 00:38:09.714 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:38:10.762 LINK nvme_fuzz 00:38:11.328 CC app/nvmf_tgt/nvmf_main.o 00:38:12.706 LINK nvmf_tgt 00:38:19.268 CC examples/nvme/reconnect/reconnect.o 00:38:19.268 CC app/iscsi_tgt/iscsi_tgt.o 00:38:19.835 LINK reconnect 00:38:20.401 LINK iscsi_tgt 00:38:28.522 CC examples/blob/cli/blobcli.o 00:38:31.053 LINK blobcli 00:39:27.336 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:39:27.336 CC examples/nvme/nvme_manage/nvme_manage.o 00:39:29.916 LINK nvme_manage 00:39:32.443 LINK iscsi_fuzz 00:40:19.205 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:40:19.205 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:40:19.205 CC examples/bdev/bdevperf/bdevperf.o 00:40:19.205 CC app/spdk_tgt/spdk_tgt.o 00:40:19.205 LINK vhost_fuzz 00:40:19.205 LINK spdk_tgt 00:40:19.205 CC test/blobfs/mkfs/mkfs.o 00:40:19.205 CC examples/vmd/lsvmd/lsvmd.o 00:40:19.205 LINK bdevperf 00:40:19.205 LINK lsvmd 00:40:19.205 LINK mkfs 00:40:34.138 CC examples/nvme/arbitration/arbitration.o 00:40:34.138 CC examples/nvmf/nvmf/nvmf.o 00:40:34.138 LINK arbitration 00:40:34.702 LINK nvmf 00:41:01.273 CC test/app/histogram_perf/histogram_perf.o 00:41:01.845 LINK histogram_perf 00:41:08.406 CC examples/vmd/led/led.o 00:41:09.341 LINK led 00:41:21.540 CC examples/util/zipf/zipf.o 00:41:21.540 LINK zipf 00:41:36.455 CC examples/thread/thread/thread_ex.o 00:41:36.455 CC examples/idxd/perf/perf.o 00:41:36.455 LINK thread 00:41:37.022 CC test/app/jsoncat/jsoncat.o 00:41:37.281 LINK idxd_perf 00:41:37.848 LINK jsoncat 00:41:39.222 CC examples/nvme/hotplug/hotplug.o 00:41:40.648 LINK hotplug 00:41:45.914 CC examples/interrupt_tgt/interrupt_tgt.o 00:41:46.853 LINK interrupt_tgt 00:42:01.751 CC test/app/stub/stub.o 00:42:01.751 LINK stub 00:42:05.940 CC app/spdk_lspci/spdk_lspci.o 00:42:06.508 LINK spdk_lspci 00:42:21.388 CC app/spdk_nvme_perf/perf.o 00:42:25.576 LINK spdk_nvme_perf 00:42:47.501 CC examples/nvme/cmb_copy/cmb_copy.o 00:42:48.068 LINK cmb_copy 00:42:58.048 TEST_HEADER include/spdk/config.h 00:42:58.048 CXX test/cpp_headers/accel_module.o 00:42:58.048 CXX test/cpp_headers/bit_pool.o 00:42:59.424 CXX test/cpp_headers/ioat.o 00:43:00.795 CXX test/cpp_headers/blobfs.o 00:43:02.171 CXX test/cpp_headers/notify.o 00:43:03.593 CXX test/cpp_headers/pipe.o 00:43:03.593 CXX test/cpp_headers/accel.o 00:43:04.525 CXX test/cpp_headers/file.o 00:43:05.459 CC test/dma/test_dma/test_dma.o 00:43:05.757 CXX test/cpp_headers/version.o 00:43:06.017 CXX test/cpp_headers/trace_parser.o 00:43:06.948 CXX test/cpp_headers/opal_spec.o 00:43:07.515 LINK test_dma 00:43:07.775 CXX test/cpp_headers/uuid.o 00:43:08.712 CXX test/cpp_headers/likely.o 00:43:08.970 CXX test/cpp_headers/dif.o 00:43:09.910 CXX test/cpp_headers/memory.o 00:43:10.172 CC examples/nvme/abort/abort.o 00:43:11.108 CXX test/cpp_headers/vfio_user_pci.o 00:43:11.767 CXX test/cpp_headers/dma.o 00:43:11.767 LINK abort 00:43:12.704 CXX test/cpp_headers/nbd.o 00:43:12.963 CXX test/cpp_headers/conf.o 00:43:14.338 CXX test/cpp_headers/env_dpdk.o 00:43:14.905 CXX test/cpp_headers/nvmf_spec.o 00:43:15.843 CC test/env/mem_callbacks/mem_callbacks.o 00:43:16.102 CXX test/cpp_headers/iscsi_spec.o 00:43:17.038 CXX test/cpp_headers/mmio.o 00:43:18.410 CXX test/cpp_headers/json.o 00:43:18.410 LINK mem_callbacks 00:43:19.345 CXX test/cpp_headers/opal.o 00:43:20.731 CXX test/cpp_headers/bdev.o 00:43:22.107 CXX test/cpp_headers/base64.o 00:43:23.044 CXX test/cpp_headers/blobfs_bdev.o 00:43:24.420 CXX test/cpp_headers/nvme_ocssd.o 00:43:25.820 CXX 
test/cpp_headers/fd.o 00:43:26.757 CXX test/cpp_headers/barrier.o 00:43:28.151 CXX test/cpp_headers/scsi_spec.o 00:43:29.088 CXX test/cpp_headers/zipf.o 00:43:30.023 CXX test/cpp_headers/nvmf.o 00:43:30.959 CXX test/cpp_headers/queue.o 00:43:31.218 CXX test/cpp_headers/xor.o 00:43:31.783 CXX test/cpp_headers/cpuset.o 00:43:32.717 CXX test/cpp_headers/thread.o 00:43:32.975 CXX test/cpp_headers/bdev_zone.o 00:43:32.975 CXX test/cpp_headers/fd_group.o 00:43:33.911 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:43:33.911 CXX test/cpp_headers/tree.o 00:43:33.911 CXX test/cpp_headers/blob_bdev.o 00:43:33.911 CC test/env/vtophys/vtophys.o 00:43:34.481 LINK pmr_persistence 00:43:34.481 LINK vtophys 00:43:34.739 CXX test/cpp_headers/crc64.o 00:43:35.677 CXX test/cpp_headers/assert.o 00:43:36.244 CXX test/cpp_headers/nvme_spec.o 00:43:36.811 CXX test/cpp_headers/endian.o 00:43:37.380 CXX test/cpp_headers/pci_ids.o 00:43:37.380 CXX test/cpp_headers/log.o 00:43:37.639 CXX test/cpp_headers/nvme_ocssd_spec.o 00:43:38.206 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:43:38.464 CXX test/cpp_headers/ftl.o 00:43:38.722 CC app/spdk_nvme_identify/identify.o 00:43:38.979 LINK env_dpdk_post_init 00:43:39.236 CXX test/cpp_headers/config.o 00:43:39.492 CXX test/cpp_headers/vhost.o 00:43:40.422 CXX test/cpp_headers/bdev_module.o 00:43:41.386 CXX test/cpp_headers/nvme_intel.o 00:43:41.645 LINK spdk_nvme_identify 00:43:42.213 CXX test/cpp_headers/idxd_spec.o 00:43:43.587 CXX test/cpp_headers/crc16.o 00:43:43.847 CXX test/cpp_headers/nvme.o 00:43:44.780 CXX test/cpp_headers/stdinc.o 00:43:45.040 CC app/spdk_nvme_discover/discovery_aer.o 00:43:45.040 CC test/event/event_perf/event_perf.o 00:43:45.608 CXX test/cpp_headers/scsi.o 00:43:45.923 LINK event_perf 00:43:46.195 LINK spdk_nvme_discover 00:43:47.134 CXX test/cpp_headers/nvmf_fc_spec.o 00:43:48.106 CXX test/cpp_headers/idxd.o 00:43:49.066 CXX test/cpp_headers/hexlify.o 00:43:50.440 CXX test/cpp_headers/reduce.o 00:43:51.375 CXX test/cpp_headers/crc32.o 00:43:52.361 CXX test/cpp_headers/init.o 00:43:53.297 CXX test/cpp_headers/nvmf_transport.o 00:43:54.670 CXX test/cpp_headers/nvme_zns.o 00:43:56.055 CXX test/cpp_headers/vfio_user_spec.o 00:43:56.622 CXX test/cpp_headers/util.o 00:43:56.881 CXX test/cpp_headers/jsonrpc.o 00:43:57.819 CXX test/cpp_headers/env.o 00:43:58.388 CC test/event/reactor/reactor.o 00:43:58.956 CXX test/cpp_headers/nvmf_cmd.o 00:43:59.215 LINK reactor 00:44:00.153 CC test/event/reactor_perf/reactor_perf.o 00:44:00.411 CXX test/cpp_headers/lvol.o 00:44:00.979 LINK reactor_perf 00:44:01.917 CXX test/cpp_headers/histogram_data.o 00:44:02.176 CC test/event/app_repeat/app_repeat.o 00:44:03.111 CXX test/cpp_headers/event.o 00:44:03.372 LINK app_repeat 00:44:05.277 CXX test/cpp_headers/trace.o 00:44:06.215 CXX test/cpp_headers/ioat_spec.o 00:44:07.594 CXX test/cpp_headers/string.o 00:44:08.973 CXX test/cpp_headers/ublk.o 00:44:10.353 CXX test/cpp_headers/bit_array.o 00:44:11.731 CXX test/cpp_headers/scheduler.o 00:44:12.666 CXX test/cpp_headers/blob.o 00:44:14.039 CXX test/cpp_headers/gpt_spec.o 00:44:15.414 CXX test/cpp_headers/sock.o 00:44:16.346 CXX test/cpp_headers/vmd.o 00:44:17.734 CXX test/cpp_headers/rpc.o 00:44:19.634 CC test/event/scheduler/scheduler.o 00:44:21.012 LINK scheduler 00:44:27.579 CC test/env/memory/memory_ut.o 00:44:28.958 CC app/spdk_top/spdk_top.o 00:44:30.338 LINK memory_ut 00:44:31.274 CC app/vhost/vhost.o 00:44:31.535 LINK spdk_top 00:44:32.104 LINK vhost 00:44:37.372 CC test/lvol/esnap/esnap.o 
00:44:38.306 CC test/env/pci/pci_ut.o 00:44:38.306 CC test/rpc_client/rpc_client_test.o 00:44:38.564 CC test/nvme/aer/aer.o 00:44:39.130 LINK rpc_client_test 00:44:39.699 LINK pci_ut 00:44:39.699 LINK aer 00:44:39.699 CC test/nvme/reset/reset.o 00:44:40.636 LINK reset 00:44:50.618 LINK esnap 00:44:50.876 CC app/spdk_dd/spdk_dd.o 00:44:52.786 LINK spdk_dd 00:45:00.913 CC app/fio/nvme/fio_plugin.o 00:45:02.819 CC test/thread/poller_perf/poller_perf.o 00:45:02.819 CC test/thread/lock/spdk_lock.o 00:45:03.078 LINK spdk_nvme 00:45:03.646 LINK poller_perf 00:45:07.833 LINK spdk_lock 00:45:13.107 CC app/fio/bdev/fio_plugin.o 00:45:15.668 LINK spdk_bdev 00:45:30.625 CC test/nvme/sgl/sgl.o 00:45:30.625 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:45:30.625 LINK sgl 00:45:30.625 LINK histogram_ut 00:45:34.814 CC test/unit/lib/accel/accel.c/accel_ut.o 00:45:35.382 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:45:43.505 LINK accel_ut 00:45:46.791 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:45:49.322 LINK blob_bdev_ut 00:45:50.699 CC test/unit/lib/blob/blob.c/blob_ut.o 00:45:53.991 LINK bdev_ut 00:46:04.009 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:46:04.267 LINK tree_ut 00:46:05.637 CC test/unit/lib/dma/dma.c/dma_ut.o 00:46:07.535 LINK dma_ut 00:46:09.439 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:46:13.632 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:46:14.198 LINK blobfs_async_ut 00:46:14.780 LINK blob_ut 00:46:18.068 LINK blobfs_sync_ut 00:46:23.342 CC test/nvme/e2edp/nvme_dp.o 00:46:24.277 LINK nvme_dp 00:46:34.253 CC test/nvme/overhead/overhead.o 00:46:34.818 LINK overhead 00:46:38.127 CC test/unit/lib/event/app.c/app_ut.o 00:46:40.029 LINK app_ut 00:46:40.290 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:46:41.225 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:46:41.791 LINK ioat_ut 00:46:42.400 LINK blobfs_bdev_ut 00:46:46.578 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:46:46.578 CC test/unit/lib/bdev/part.c/part_ut.o 00:46:46.835 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:46:48.737 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:46:49.671 LINK conn_ut 00:46:51.573 LINK reactor_ut 00:46:51.573 LINK json_parse_ut 00:46:55.760 LINK part_ut 00:46:56.387 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:46:57.323 LINK jsonrpc_server_ut 00:47:01.512 CC test/nvme/err_injection/err_injection.o 00:47:02.080 LINK err_injection 00:47:02.646 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:47:04.550 LINK init_grp_ut 00:47:06.456 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:47:06.456 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:47:06.720 LINK scsi_nvme_ut 00:47:07.660 LINK json_util_ut 00:47:07.919 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:47:08.178 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:47:08.744 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:47:10.114 LINK gpt_ut 00:47:10.372 LINK json_write_ut 00:47:10.372 CC test/unit/lib/iscsi/param.c/param_ut.o 00:47:11.745 LINK param_ut 00:47:12.681 LINK iscsi_ut 00:47:15.968 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:47:15.968 CC test/nvme/startup/startup.o 00:47:16.227 LINK startup 00:47:16.797 CC test/nvme/reserve/reserve.o 00:47:17.365 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:47:17.623 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:47:17.623 LINK reserve 00:47:18.191 LINK vbdev_lvol_ut 00:47:19.634 LINK portal_grp_ut 00:47:22.917 CC test/nvme/simple_copy/simple_copy.o 00:47:23.486 LINK simple_copy 00:47:26.017 LINK bdev_ut 
00:47:27.394 CC test/nvme/connect_stress/connect_stress.o 00:47:28.330 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:47:28.589 LINK connect_stress 00:47:31.126 LINK tgt_node_ut 00:47:32.065 CC test/nvme/boot_partition/boot_partition.o 00:47:33.441 LINK boot_partition 00:47:41.555 CC test/unit/lib/log/log.c/log_ut.o 00:47:41.555 LINK log_ut 00:47:42.130 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:47:45.422 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:47:46.362 LINK bdev_zone_ut 00:47:46.929 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:47:48.838 LINK bdev_raid_ut 00:47:50.224 LINK vbdev_zone_block_ut 00:47:52.759 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:47:56.048 CC test/unit/lib/notify/notify.c/notify_ut.o 00:47:57.428 LINK notify_ut 00:47:57.428 LINK lvol_ut 00:48:00.715 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:48:00.715 CC test/nvme/compliance/nvme_compliance.o 00:48:01.282 CC test/nvme/fused_ordering/fused_ordering.o 00:48:01.282 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:48:01.883 LINK fused_ordering 00:48:02.171 LINK nvme_compliance 00:48:02.429 LINK bdev_raid_sb_ut 00:48:04.336 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:48:04.904 LINK concat_ut 00:48:05.227 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:48:07.148 LINK raid1_ut 00:48:07.148 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:48:08.086 LINK bdev_nvme_ut 00:48:09.018 CC test/nvme/doorbell_aers/doorbell_aers.o 00:48:09.953 LINK doorbell_aers 00:48:09.953 LINK nvme_ut 00:48:11.854 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:48:15.140 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:48:15.708 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:48:18.996 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:48:19.256 LINK raid5f_ut 00:48:21.160 LINK nvme_ctrlr_ut 00:48:25.346 LINK tcp_ut 00:48:25.939 LINK ctrlr_ut 00:48:28.506 CC test/nvme/fdp/fdp.o 00:48:29.880 LINK fdp 00:48:31.259 CC test/nvme/cuse/cuse.o 00:48:31.827 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:48:34.362 LINK cuse 00:48:34.929 LINK nvme_ctrlr_cmd_ut 00:48:38.218 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:48:39.160 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:48:39.418 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:48:39.983 LINK subsystem_ut 00:48:39.983 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:48:39.983 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:48:40.916 LINK nvme_ctrlr_ocssd_cmd_ut 00:48:40.916 LINK dev_ut 00:48:40.916 LINK nvme_ns_ut 00:48:40.916 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:48:40.916 LINK ctrlr_discovery_ut 00:48:41.851 LINK lun_ut 00:48:42.419 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:48:42.986 LINK scsi_ut 00:48:43.553 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:48:44.487 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:48:44.745 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:48:45.313 LINK scsi_bdev_ut 00:48:45.313 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:48:45.313 LINK scsi_pr_ut 00:48:46.250 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:48:46.818 CC test/unit/lib/sock/sock.c/sock_ut.o 00:48:46.818 CC test/unit/lib/sock/posix.c/posix_ut.o 00:48:46.818 CC test/unit/lib/thread/thread.c/thread_ut.o 00:48:47.091 LINK ctrlr_bdev_ut 00:48:47.091 LINK nvme_ns_ocssd_cmd_ut 00:48:47.659 LINK nvme_ns_cmd_ut 00:48:48.227 LINK posix_ut 00:48:48.227 LINK sock_ut 00:48:48.794 LINK thread_ut 00:48:48.794 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 
00:48:49.731 LINK iobuf_ut 00:48:49.731 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:48:50.358 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:48:50.615 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:48:51.552 LINK nvmf_ut 00:48:51.552 LINK nvme_poll_group_ut 00:48:51.552 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:48:51.811 LINK nvme_pcie_ut 00:48:52.070 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:48:52.070 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:48:52.328 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:48:52.328 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:48:52.586 LINK nvme_quirks_ut 00:48:52.586 LINK nvme_qpair_ut 00:48:53.153 LINK nvme_transport_ut 00:48:53.153 CC test/unit/lib/util/base64.c/base64_ut.o 00:48:53.411 LINK base64_ut 00:48:53.669 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:48:53.669 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:48:53.927 LINK rdma_ut 00:48:53.927 LINK nvme_tcp_ut 00:48:54.185 LINK bit_array_ut 00:48:54.185 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:48:54.442 LINK cpuset_ut 00:48:54.442 LINK nvme_io_msg_ut 00:48:54.699 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:48:54.699 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:48:55.264 LINK crc16_ut 00:48:56.205 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:48:56.463 LINK crc32_ieee_ut 00:48:56.463 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:48:56.721 LINK nvme_pcie_common_ut 00:48:56.721 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:48:56.978 LINK crc32c_ut 00:48:57.236 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:48:57.236 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:48:57.236 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:48:57.494 LINK nvme_fabric_ut 00:48:57.753 LINK crc64_ut 00:48:57.753 LINK pci_event_ut 00:48:57.753 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:48:58.012 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:48:58.012 LINK nvme_opal_ut 00:48:58.012 CC test/unit/lib/util/dif.c/dif_ut.o 00:48:58.271 CC test/unit/lib/util/iov.c/iov_ut.o 00:48:58.271 LINK iov_ut 00:48:58.530 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:48:58.810 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:48:58.810 LINK subsystem_ut 00:48:59.067 LINK dif_ut 00:48:59.067 LINK transport_ut 00:48:59.067 LINK nvme_rdma_ut 00:48:59.067 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:48:59.634 LINK rpc_ut 00:48:59.634 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:48:59.634 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:48:59.892 LINK nvme_cuse_ut 00:49:00.458 LINK idxd_user_ut 00:49:00.716 CC test/unit/lib/rdma/common.c/common_ut.o 00:49:01.652 LINK common_ut 00:49:01.910 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:49:02.168 LINK vhost_ut 00:49:02.168 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:49:02.427 LINK ftl_l2p_ut 00:49:02.427 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:49:02.991 LINK idxd_ut 00:49:02.991 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:49:02.991 LINK ftl_band_ut 00:49:03.251 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:49:03.251 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:49:03.251 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:49:03.511 CC test/unit/lib/util/math.c/math_ut.o 00:49:03.511 LINK ftl_io_ut 00:49:03.511 LINK ftl_bitmap_ut 00:49:03.511 LINK ftl_mempool_ut 00:49:03.774 LINK math_ut 00:49:03.774 LINK pipe_ut 00:49:04.716 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:49:04.716 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:49:04.716 CC 
test/unit/lib/util/string.c/string_ut.o 00:49:04.716 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:49:04.975 CC test/unit/lib/util/xor.c/xor_ut.o 00:49:04.975 LINK string_ut 00:49:05.233 LINK ftl_mngt_ut 00:49:05.233 LINK xor_ut 00:49:05.491 LINK ftl_layout_upgrade_ut 00:49:05.491 LINK ftl_sb_ut 00:49:27.434 14:13:06 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:49:27.434 make[1]: Nothing to be done for 'clean'. 00:49:31.625 14:13:10 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:49:31.625 14:13:10 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:49:31.625 14:13:10 -- common/autotest_common.sh@10 -- $ set +x 00:49:31.884 14:13:11 -- spdk/autopackage.sh@48 -- $ timing_finish 00:49:31.884 14:13:11 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:31.884 14:13:11 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:49:31.884 14:13:11 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:49:31.884 + [[ -n 2386 ]] 00:49:31.884 + sudo kill 2386 00:49:31.884 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:49:31.894 [Pipeline] } 00:49:31.916 [Pipeline] // timeout 00:49:31.923 [Pipeline] } 00:49:31.948 [Pipeline] // stage 00:49:31.955 [Pipeline] } 00:49:31.976 [Pipeline] // catchError 00:49:31.987 [Pipeline] stage 00:49:31.989 [Pipeline] { (Stop VM) 00:49:32.005 [Pipeline] sh 00:49:32.284 + vagrant halt 00:49:35.572 ==> default: Halting domain... 00:49:45.596 [Pipeline] sh 00:49:45.874 + vagrant destroy -f 00:49:49.183 ==> default: Removing domain... 00:49:49.763 [Pipeline] sh 00:49:50.045 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_2/output 00:49:50.056 [Pipeline] } 00:49:50.077 [Pipeline] // stage 00:49:50.084 [Pipeline] } 00:49:50.103 [Pipeline] // dir 00:49:50.110 [Pipeline] } 00:49:50.130 [Pipeline] // wrap 00:49:50.138 [Pipeline] } 00:49:50.155 [Pipeline] // catchError 00:49:50.166 [Pipeline] stage 00:49:50.169 [Pipeline] { (Epilogue) 00:49:50.189 [Pipeline] sh 00:49:50.537 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:08.643 [Pipeline] catchError 00:50:08.645 [Pipeline] { 00:50:08.661 [Pipeline] sh 00:50:08.945 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:08.945 Artifacts sizes are good 00:50:08.954 [Pipeline] } 00:50:08.973 [Pipeline] // catchError 00:50:08.984 [Pipeline] archiveArtifacts 00:50:08.992 Archiving artifacts 00:50:09.377 [Pipeline] cleanWs 00:50:09.390 [WS-CLEANUP] Deleting project workspace... 00:50:09.390 [WS-CLEANUP] Deferred wipeout is used... 00:50:09.397 [WS-CLEANUP] done 00:50:09.398 [Pipeline] } 00:50:09.417 [Pipeline] // stage 00:50:09.424 [Pipeline] } 00:50:09.444 [Pipeline] // node 00:50:09.450 [Pipeline] End of Pipeline 00:50:09.492 Finished: SUCCESS